Adrian Bunk wrote:
> On Tue, Nov 15, 2005 at 11:46:30AM -0500, Giridhar Pemmasani wrote:
>> Arjan van de Ven wrote:
>>> the same as 2.4 effectively. 2.6 also has (and I wish it becomes
>>> "had" soon) an option to get 6Kb effective stack space instead. This
>>> is an increase of 2Kb compared to 2.4.
>> It has been asked a couple of times before in this context and no one
>> cared to answer: using 4k stacks may have advantages, but what
>> compelling reasons are there to drop the choice of 4k/8k stacks? You
>> can make 4k stacks the default, but why throw away the option of 8k
>> stacks, especially since the impact of this option on the kernel
>> implementation is very small?
> One important point is to get the remaining problems reported:
> all the known issues in e.g. xfs, dm or reiser4 should have been
> addressed. But how many issues were never reported because people
> noticed that disabling CONFIG_4KSTACKS fixed the problem for them and
> therefore didn't report it?
> I experienced something similar with my patch to schedule OSS drivers
> with ALSA replacements for removal - when someone reported he needed
> an OSS driver for $reason, I asked him for bug numbers in the ALSA bug
> tracking system - and the most that came of it was 4 new bugs filed
> against one ALSA driver.
> Unconditionally enabling 4k stacks is the only way to achieve this.
The problem is that you persist in saying "the only way to achieve this"
without admitting that some people are questioning the need to run with
4k stacks at all. The only argument I have seen for 4k stacks is that
memory is allocated in 4k blocks and there might not be 8k of contiguous
memory available. When that's true, the system is probably in deep
trouble on memory anyway.
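The fragmentation argument above can be sketched with a toy model (my
own illustration, not kernel code - the page layout and numbers are made
up): with every other 4k page free there is plenty of free memory in
total, yet an 8k (two-adjacent-page) allocation still fails while every
4k allocation succeeds.

```python
def first_free_run(free_pages, npages):
    """Return the start of the first run of npages adjacent free page
    numbers, or None if no such run exists."""
    start = run = prev = None
    for p in sorted(free_pages):
        if run is not None and p == prev + 1:
            run += 1
        else:
            start, run = p, 1
        prev = p
        if run == npages:
            return start
    return None

# Every other physical page free: 32 free 4k pages (128k free in total),
# but no two of them are adjacent.
free_pages = set(range(0, 64, 2))

print(first_free_run(free_pages, 1))  # 4k stack: succeeds (page 0)
print(first_free_run(free_pages, 2))  # 8k stack: fails -> None
```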
As someone pointed out, using a larger memory allocation block (i.e., a
multiple of the hardware's minimum page size) would avoid the
fragmentation, make all the bitmaps smaller, and generally have minimal
effect either way on memory use. And you could make the stack size equal
to the allocation block size and never have to do conversions. Then the
allocation size could be anything reasonable, from 4k to 32k as
mentioned recently.
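The "smaller bitmaps" point is simple arithmetic; here is a
back-of-the-envelope sketch (my numbers, not from the thread - assuming
1 GiB of RAM and one bit per free block) of how the free-block bitmap
shrinks as the allocation block grows:

```python
RAM = 1 << 30  # assume 1 GiB of physical memory

for block_size in (4 << 10, 8 << 10, 16 << 10, 32 << 10):
    nblocks = RAM // block_size
    bitmap_bytes = nblocks // 8  # one bit per block
    print(f"{block_size >> 10:2d}k blocks: {nblocks:6d} blocks, "
          f"{bitmap_bytes >> 10}k bitmap")
```

Doubling the block size halves the bitmap: 32k of bitmap with 4k blocks
down to 4k of bitmap with 32k blocks, for the same 1 GiB.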
Given the memory size of typical computers today, saving a few K per
process matters as much as a beer fart in a cow barn.
Do the non-x86 platforms all use 4k stacks? If so, why is it such a big
thing to make it the only choice for x86?
It seems like a lot of effort is being spent making things run in 4k
stacks, with minimal consideration of what benefits are gained or
whether there are other ways to gain them. It just feels as though it's
being done to prove it's possible. Linux is about choice; let's get back
to that.
--
-bill davidsen ([email protected])
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/