Re: hibernation/snapshot design [was Re: [PATCH] Remove process freezer from suspend to RAM pathway]

On Mon, 9 Jul 2007, Pavel Machek wrote:

On Sun 2007-07-08 16:20:46, [email protected] wrote:
On Mon, 9 Jul 2007, Pavel Machek wrote:

Actually, I'm perfectly fine with that, as long as each task blocked by the
driver due to suspend has PF_FROZEN (or something similar) set.  Then, at
least theoretically, we'll be able to drop the freezer from the suspend code
path and move it after device_suspend() (or the hibernation-specific
equivalent) for hibernation (in that case there shouldn't be a problem with
any task waiting on I/O while the freezer is running ;-)).
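
For illustration, a minimal sketch of what that proposal could look like
from a driver's point of view, assuming a task flag such as PF_FROZEN
really were honoured this way; the device structure, wait queue, and
helper name below are made up, and this is a sketch of the idea being
discussed, not existing kernel practice:

#include <linux/sched.h>   /* current, PF_FROZEN */
#include <linux/wait.h>    /* wait_queue_head_t, wait_event_interruptible() */

/* Hypothetical example device: only what the sketch needs. */
struct mydrv_device {
        wait_queue_head_t resume_wq;
        bool suspended;
};

/* Hypothetical: a driver that has to park a task across suspend marks it
 * so the suspend core could treat it as if the freezer had caught it. */
static int mydrv_wait_for_resume(struct mydrv_device *dev)
{
        int ret;

        current->flags |= PF_FROZEN;    /* "already parked, skip me" */
        ret = wait_event_interruptible(dev->resume_wq, !dev->suspended);
        current->flags &= ~PF_FROZEN;

        return ret;
}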

I don't see the need for a freezer for the snapshot, but that's a
different issue. (stop_machine looks good enough to me).
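
A rough sketch of what "snapshot under stop_machine" could mean, using the
stop_machine() API from <linux/stop_machine.h> (shown with its current
signature; copy_snapshot_pages() and take_snapshot() are made-up names and
the actual page-copying loop is omitted):

#include <linux/stop_machine.h>

/* Runs on one CPU while all the others spin with interrupts disabled, so
 * the memory it copies cannot change underneath it. */
static int copy_snapshot_pages(void *unused)
{
        /* ...walk memory and copy the pages that need saving... */
        return 0;
}

static int take_snapshot(void)
{
        /* NULL cpumask: run on any one CPU, hold all the others */
        return stop_machine(copy_snapshot_pages, NULL, NULL);
}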

The freezer is not needed for the snapshot itself -- it is needed so that
we can write the snapshot out to disk without needing a special
drivers/block/simple-ide-for-suspend.c. (We take the snapshot, then
write it to disk from userland code in uswsusp).
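
Roughly what that userland write-out looks like against the documented
/dev/snapshot interface (a trimmed sketch, not uswsusp itself; the ioctl
names and header follow Documentation/power/userland-swsusp as found in
current kernels, the output path is arbitrary, and error handling plus the
swap/resume bookkeeping are left out):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/suspend_ioctls.h>   /* SNAPSHOT_* ioctl definitions */

int main(void)
{
        char buf[4096];
        int in_suspend = 0;
        ssize_t n;
        int dev = open("/dev/snapshot", O_RDONLY);
        int img = open("/tmp/image", O_WRONLY | O_CREAT | O_TRUNC, 0600);

        if (dev < 0 || img < 0)
                return 1;

        ioctl(dev, SNAPSHOT_FREEZE, 0);                 /* freeze user space */
        ioctl(dev, SNAPSHOT_CREATE_IMAGE, &in_suspend); /* atomic snapshot */

        if (in_suspend) {
                /* back with a fresh image: stream it out to disk */
                while ((n = read(dev, buf, sizeof(buf))) > 0)
                        write(img, buf, n);
                /* ...then power the machine down... */
        }

        ioctl(dev, SNAPSHOT_UNFREEZE, 0);
        return 0;
}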

Instead of trying to freeze most of the system, could you do something
like start a virtual machine sandbox to write the data out, and not let
any userspace other than the sandbox operate?

You would need to throw away disk buffers so that you don't mix current
pending I/O with I/O from the sandbox, and this would be a visible change
to how suspend is set up, but wouldn't this work?

It feels kind of expensive, but yes, we could use another kernel for
doing the dump; the kdump people are doing that. We could use a
hypervisor for doing the dump; the Xen people are doing that. (But I do
not think either of those solutions is suitable for the "let's hibernate
my notebook" case).

Expensive and reliable beats efficient and unreliable.

Why do you say that neither would work for the "let's hibernate my
notebook" case?

Both would work. One would eat 8-64MB of your RAM, permanently; the
second would eat 5-15% of your CPU, permanently. Not very suitable.

How much overlap is there between the two approaches? Are they close enough to give the user the choice of which to use depending on their machine (new machines with hardware virtualization support may want the hypervisor, other hardware may want to sacrifice 8MB of RAM)?

Who says the current solution is unreliable?

Users report problems, and the suspend* developers repeatedly state that the problems are in the rest of the kernel, which needs to be fixed to work properly with the existing approach.

I think it's safe to say that it doesn't work in the general case, even though it does work in some specific cases.

David Lang
