Hi!
> > > I can agree with putting splash screens and userspace stuff in
> > > userspace. Suspend2 has had that too, since March. But the guts of
> > > the
> >
> > Well, I'd say that having to resort to netlink is ... not quite
> > nice. You get all the complexity of having userspace running during
> > suspend, and get very little benefit.
>
> Mmm, but it's less complexity than trying to do the whole suspend from
> userspace. (I don't need to export pageflags, bio routines etc., or work
> around that by using /dev/kmem.)
Well, userland swsusp has pretty low impact on kernel code -- it adds
something like 150 lines:
 drivers/char/mem.c      |   42 +
 include/linux/suspend.h |   23
 kernel/power/console.c  |    1
 kernel/power/disk.c     |   19
 kernel/power/swsusp.c   |   78 +
 usr/swsusp-init         |    9
 8 files changed, 2696 insertions(+), 4 deletions(-)
I don't think you can do much better than that...
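To make concrete what the userland side looks like: something like the
sketch below. The ioctl names follow the /dev/snapshot interface as it
later settled in mainline (Documentation/power/userland-swsusp.txt); the
patch above differs in detail, and error handling and swap-page
bookkeeping are omitted:

	/* Minimal sketch of a userland suspend tool.  NOT the real
	 * tool: the output target and the missing error handling
	 * are illustrative only. */
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/suspend_ioctls.h>

	int main(void)
	{
		char buf[4096];
		ssize_t n;
		int in_suspend = 0;
		int dev = open("/dev/snapshot", O_RDONLY);
		int out = open("/dev/sda2", O_WRONLY);	/* swap device: example only */

		ioctl(dev, SNAPSHOT_FREEZE, 0);		/* freeze userspace */
		ioctl(dev, SNAPSHOT_CREATE_IMAGE, &in_suspend);	/* atomic copy */

		if (in_suspend) {
			/* The image is just a stream of pages; compression,
			 * encryption, splash screens etc. all live here. */
			while ((n = read(dev, buf, sizeof(buf))) > 0)
				write(out, buf, n);
			ioctl(dev, SNAPSHOT_POWER_OFF, 0);
		}
		/* in_suspend == 0 means we just resumed from the image */
		ioctl(dev, SNAPSHOT_UNFREEZE, 0);
		return 0;
	}

All the policy -- where the image goes, how it is transformed -- stays in
userspace; the kernel only freezes, snapshots, and hands pages out.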
> > ...at the expense of complexity, and hooks all over the kernel. Yes, if
> > you modify the kernel a bit, nothing will use the page cache.
>
> Could you back your "hooks all over the kernel" statement up? I do have
> some BUG_ON()s aimed at double-checking that nothing bad happens, but
> they never get hit, and they obviously aren't what stops processes from
> using the page cache. All that's really required is to freeze processes.
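> Roughly, in code (a sketch, not the actual Suspend2 source):
>
> 	/* Kernel-side sketch: once every task is in the refrigerator,
> 	 * nothing new can enter the page cache, so the BUG_ON()s are
> 	 * belt-and-braces, not load-bearing. */
> 	static int prepare_image(void)
> 	{
> 		if (freeze_processes()) {	/* nonzero: some task refused */
> 			thaw_processes();	/* back out cleanly */
> 			return -EBUSY;
> 		}
> 		/* safe to enumerate and save page cache pages now */
> 		return 0;
> 	}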
Are you willing to merge the code without the BUG_ON()s?
> > Anyway, I believe we have a solution for that one. See Rafael's recent
> > patches -- "only free as much memory as necessary" should do the
> > trick, without excessive complexity.
>
> That's still imposing a half-of-memory limit, though.
Yes, hopefully users will not notice.
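The idea is simple (a sketch only -- Rafael's actual patch also reserves
some extra pages for I/O and image metadata):

	/* Keep reclaiming until the pages we must copy fit into the
	 * pages that are free; stop early if reclaim makes no progress. */
	static void shrink_for_suspend(void)
	{
		while (count_data_pages() > nr_free_pages())
			if (!shrink_all_memory(SHRINK_BITE))
				break;	/* nothing left to reclaim */
	}

So on a box with little dirty data you lose almost nothing, and you only
approach the 50% worst case when memory really is full.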
> > Well, I do not want the complexity of two page sets. I think Rafael's
> > patches will provide almost equivalent functionality. Other than that,
> > all your features should be doable. I'm not saying I'm going to write
> > those patches myself, but I'll certainly not reject them just because
> > they are too big.
>
> I'm sorry for making you think that having two pagesets is a complex
> issue. I know that when I first did it, I put tight restrictions on
> memory usage while the first pageset was written, and used a separate
> memory pool. Since then, I've realised a far simpler way of handling
> this, and the code has been greatly simplified. In essence, all you need
> to do is make your I/O code generic enough that it can be passed a list
> of pages to write, and put page cache pages in a separate list when
> figuring out which pages need to be saved. Then you save those pages
> before doing your atomic copy of the other pages, and reload them after
> restoring the atomic copy at resume time.
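> Roughly, the ordering is just (illustrative names, not the actual
> Suspend2 functions):
>
> 	/* suspend, with all processes already frozen: */
> 	write_pages(pageset2);		/* page cache; stable while frozen */
> 	atomic_copy(pageset1);		/* kernel data, process pages, etc. */
> 	write_pages(pageset1_copy);
>
> 	/* resume: */
> 	load_pages(pageset1_copy);
> 	atomic_restore(pageset1);
> 	load_pages(pageset2);		/* reload page cache last */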
Okay, it may have gotten better. Anyway, this is the only part that
really needs to be in-kernel. Saving 50% of memory is still going to
produce a *way* more responsive system than "save as little as
possible", and I hope it will be good enough.
Pavel
--
Thanks, Sharp!