Re: Which is simpler? (Was Re: [Suspend2-devel] Re: [ 00/10] [Suspend2] Modules support.)

Hi.

On Saturday 25 February 2006 10:20, Rafael J. Wysocki wrote:
> Hi,
>
> On Saturday 25 February 2006 00:11, Nigel Cunningham wrote:
> > On Saturday 25 February 2006 06:22, Rafael J. Wysocki wrote:
> > > On Friday 24 February 2006 14:12, Pavel Machek wrote:
> > > > On Fri 24-02-06 11:58:07, Rafael J. Wysocki wrote:
>
> }-- snip --{
>
> > > > > Well, if all of the pages that Nigel saves before the snapshot are
> > > > > freeable in theory, there evidently is something wrong with freeing
> > > > > in swsusp, as we have a test case in which the user was unable to
> > > > > suspend with swsusp due to lack of memory but could suspend with
> > > > > suspend2.
> > > > >
> > > > > However, the only thing in swsusp_shrink_memory() that may be wrong
> > > > > is that we return -ENOMEM as soon as shrink_all_memory() returns 0.
> > > > > Namely, if shrink_all_memory() can return 0 prematurely (i.e. "there
> > > > > are still some freeable pages, but they could not be freed in
> > > > > _this_ call"), we should continue until it returns 0 twice in a row
> > > > > (or something like that).  If this doesn't help, we'll have to fix
> > > > > shrink_all_memory(), I'm afraid.
> > > >
> > > > I did try shrink_all_memory() five times, with a .5 second delay
> > > > between the calls, and it freed more memory on the later tries.
> > >
> > > I wonder whether the delays are essential and, if so, whether they may
> > > be shorter than .5 sec.
> > >
> > > > Sometimes it even freed 0 pages on the first try.
> > > >
> > > > I did not push the patch because
> > > >
> > > > 1) it was way too ugly
> > >
> > > I think I can do something like that in swsusp_shrink_memory() and it
> > > won't be very ugly.
> > >
> > > > 2) shrink_all_memory() should be fixed.  It should not return while
> > > > there are still freeable pages.
> > >
> > > Well, that would be the long-term solution.  However, until it's fixed
> > > we can use a workaround IMHO. ;-)
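
For concreteness, the retry logic discussed above might look roughly like
the sketch below.  This is not the actual swsusp code: the simplified
signature, the helper name and the msleep() call are assumptions for
illustration only.

#include <linux/delay.h>	/* msleep() */
#include <linux/errno.h>	/* ENOMEM */
#include <linux/swap.h>		/* shrink_all_memory() */

/*
 * Keep calling shrink_all_memory() until it makes no progress twice in
 * a row, instead of giving up on the first zero return, with a short
 * delay between empty passes as in Pavel's experiment.
 */
static int shrink_memory_retry_sketch(unsigned long pages_needed)
{
	unsigned long freed = 0;
	int idle_passes = 0;

	while (freed < pages_needed) {
		unsigned long ret = shrink_all_memory(pages_needed - freed);

		if (ret == 0) {
			/* Give up only after two empty passes in a row. */
			if (++idle_passes >= 2)
				return -ENOMEM;
			msleep(500);
		} else {
			idle_passes = 0;
			freed += ret;
		}
	}
	return 0;
}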
> >
> > Isn't trying to free as much memory as you can the wrong solution anyway?
>
> No, it isn't.  There are situations in which we would like to suspend
> "whatever it takes", and then we should be able to free as much memory
> as is _really_ possible.
>
> Besides, it is supposed to work, and it doesn't, so it needs fixing.
>
> > I mean, that only means that the poor system has more pages to fault back
> > in at resume time, before the user can even begin to think about doing
> > anything useful.
>
> Well, that's not the only possibility.  After we fix the memory freeing
> issue we can use the observation that page cache pages need not be saved
> to disk during suspend, because they are already in storage.  We only need
> to create a map of these pages during suspend, recording where to get them
> from, and prefetch them into memory during resume independently of the
> page fault mechanism.
>
> This way we won't have to actually save anything before we snapshot the
> system and the system should be reasonably responsive after resume.
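
For illustration, a map entry of the kind being proposed might look like
the sketch below; the structure and field names are invented here, not
taken from swsusp or suspend2.

#include <linux/types.h>	/* dev_t, sector_t */

/*
 * Hypothetical record for one page cache page: rather than writing the
 * page into the image, remember where its data already lives on backing
 * storage so it can be prefetched at resume time.
 */
struct pagecache_map_entry {
	unsigned long	pfn;	/* page frame to repopulate at resume */
	dev_t		dev;	/* backing device holding the data */
	sector_t	sector;	/* location of the data on that device */
};

/*
 * A resume-time prefetch pass would then walk the saved map and read
 * each entry back in, bypassing the page fault path:
 *
 *	for each entry in map:
 *		read entry.sector from entry.dev into page entry.pfn
 */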

But this is going to be much more complicated than simply saving the pages
in the first place. You'll need some mechanism for figuring out which pages
to get, how to fault them in and so on. In addition, it will be much slower
than simply reading them back from (ideally) contiguous storage.

Regards,

Nigel
