Re: [PATCH][RFC] 4K stacks default, not a debug thing any more...?

On Mon, Jul 16, 2007 at 06:27:55PM -0500, Matt Mackall wrote:
> So it's absolutely no help in fixing our order-1 allocation problem
> because we don't want to force large pages on people.

Using kmalloc(8k) instead of alloc_page() doesn't sound like too big a
deal, and it would solve the problem. The whole idea is to avoid the
memcpy + pte mangling of defrag while hopefully lowering cpu
utilization in allocations at the same time.
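
To make the comparison concrete, here is a minimal, purely
illustrative sketch (not code from any actual patch; the helper names
are invented) contrasting an order-1 request to the page allocator
with an 8k slab allocation of the same size:

/*
 * Illustrative only: the same 8k, once as two physically contiguous
 * pages from the page allocator, once as a slab object via kmalloc().
 * The function names are made up for this example.
 */
#include <linux/gfp.h>
#include <linux/slab.h>

#define STACK_SIZE	8192

/* Order-1 allocation straight from the page allocator. */
static unsigned long stack_from_page_allocator(void)
{
	return __get_free_pages(GFP_KERNEL, 1);	/* order-1: 2 pages */
}

static void stack_to_page_allocator(unsigned long stack)
{
	free_pages(stack, 1);
}

/* Same size, but let the slab allocator hand out the 8k object. */
static void *stack_from_slab(void)
{
	return kmalloc(STACK_SIZE, GFP_KERNEL);
}

static void stack_to_slab(void *stack)
{
	kfree(stack);
}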

About 4k stacks, I was generally against them: much better to fail in
fork than to risk corruption. The per-irq stack part is a great
feature instead (too bad it wasn't enabled for the safer 8k stacks).

Failing in do_no_page with a variable-order page size allocation is a
fatal event (the task will be killed); failing in fork is graceful,
userland can retry etc... Fork can fail for different reasons; ulimit
itself is the most likely source of fork failures. I don't think the
8k stacks have ever been a problem: yes, you will run out of stack
sooner (sooner also because the 4k stacks take less memory), but
nothing is terribly wrong if the 8k allocation fails.
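
As a purely illustrative userland sketch (assuming nothing beyond
fork() returning -1 with errno set, e.g. EAGAIN on a ulimit hit or
ENOMEM), handling the graceful failure is trivial:

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Retry fork() a few times when it fails for a transient reason. */
static pid_t fork_with_retry(int attempts)
{
	while (attempts-- > 0) {
		pid_t pid = fork();

		if (pid >= 0)
			return pid;	/* 0 in the child, child's pid in the parent */

		if (errno == EAGAIN || errno == ENOMEM) {
			sleep(1);	/* back off, then try again */
			continue;
		}
		break;			/* unexpected error, give up */
	}
	return -1;
}

int main(void)
{
	pid_t pid = fork_with_retry(3);

	if (pid < 0) {
		perror("fork");
		return 1;
	}
	if (pid == 0)
		_exit(0);		/* child exits immediately */
	printf("forked child %d\n", (int)pid);
	return 0;
}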
