Hi,
On Friday 24 February 2006 00:44, Pavel Machek wrote:
> > > > > [Because pagecache is freeable, anyway, so it will be freed. Now... I
> > > > > have seen some problems where free_some_memory did not free enough,
> > > > > and schedule()/retry helped a bit... that probably should be fixed.]
> > > >
> > > > It seems I need to understand correctly what the difference between what
> > > > we do and what Nigel does is. I thought Nigel's approach was to save
> > > > some cache pages to disk first and use the memory occupied by them to
> > > > store the image data. If so, is the page cache involved in that or
> > > > something else?
> > >
> > > I believe Nigel only saves pages that could have been freed anyway
> > > during phase1. Nigel, correct me here... suspend2 should work on same
> > > class of machines swsusp can, but will be able to save caches on
> > > machines where swsusp can not save any.
> >
> > I'm not used to thinking in these terms :). It would normally be right,
> > except that there will be some LRU pages that will never be freed. These
> > would allow suspend2 to work in some (not many) cases where swsusp can't.
> > It's been ages since I did the intensive testing on the image preparation
> > code, but I think that if we free as much memory as we can, we will always
> > still have at least a few hundred LRU pages left. That's not much, but on
> > machines with less ram, it might make the difference in a greater percentage
> > of cases (compared to machines with more ram)?
>
> Well, pages in LRU should be user pages, and therefore freeable,
> AFAICT. It is possible that there's something wrong with freeing in
> swsusp1...
Well, if all of the pages that Nigel saves before snapshot are freeable in
theory, there evidently is something wrong with freeing in swsusp, as we
have a testcase in which the user was unable to suspend with swsusp due
to the lack of memory and could suspend with suspend2.
However, the only thing in swsusp_shrink_memory() that may be wrong
is that we return -ENOMEM as soon as shrink_all_memory() returns 0.
Namely, if shrink_all_memory() can return 0 prematurely (i.e. "there still
are some freeable pages, but they could not be freed in _this_ call"), we
should continue until it returns 0 twice in a row (or something like that).
If this doesn't help, we'll have to fix shrink_all_memory() itself, I'm afraid.
Greetings,
Rafael
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/