linux-os (Dick Johnson) wrote:
> On Thu, 15 Dec 2005, Lee Revell wrote:
>> On Thu, 2005-12-15 at 14:46 -0700, Jeff V. Merkey wrote:
>>> Lee Revell wrote:
>>>> On Thu, 2005-12-15 at 14:07 -0700, Jeff V. Merkey wrote:
>>>>> When you are on the phone with an irate customer at 2:00 in the
>>>>> morning, just turning off your broken 4K stack fix and getting
>>>>> the customer running is all that matters.
>>>> Bugzilla link please. Otherwise STFU.
>>> ??????
>>> Jeff
>> You imply that your customer's problem was due to a kernel bug triggered
>> by CONFIG_4KSTACKS. I am asking you to provide a link to the bug report
>> or get lost.
>> Lee
> Throughout the past two years of 4k stack-wars, I never heard why
> such a small stack was needed (not wanted, needed). It seems that
> everybody "knows" that smaller is better, and almost everybody thinks
> that one page in ix86 land is "optimum". However, I don't think
> anybody ever tried to analyze what was better from a technical
> perspective. Instead it's been treated as religious dogma, i.e.,
> keep the stack small and it will prevent idiots from doing bad things.
> I'm fairly sure that if you started from scratch and decided to
> write a new operating system, your choice of stack size would
> probably be something like 64k. I have no clue why somebody
> decided to use a 4k stack and force their choice upon others.
> And, yes, I am well aware that each system call requires a
> separate stack upon entry and even needs to keep that stack
> while sleeping.
No. If writing an OS from scratch, then using 4k stacks would
be absolutely trivial. The problems happen now only because
we're switching away from the 8k stacks people were used to having.
Design with 4k stacks from the start, and you'll see people writing
code with the assumption that they can't stick more than a handful
of ints/pointers on the stack.
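
For reference, the knob under discussion sizes the per-task kernel
stack. A simplified sketch, paraphrased from the 2.6-era
include/asm-i386/thread_info.h rather than quoted verbatim:

/* Simplified sketch of per-task kernel stack sizing on i386,
 * paraphrased from 2.6-era include/asm-i386/thread_info.h.
 * The struct thread_info sits at the bottom of this same area,
 * so usable stack space is slightly less than THREAD_SIZE. */
#ifdef CONFIG_4KSTACKS
#define THREAD_SIZE	4096	/* one page per task */
#else
#define THREAD_SIZE	8192	/* two consecutive pages per task */
#endif

Every task in the system pays this cost, which is why halving it
(and needing only a single page instead of two consecutive ones)
was considered worth the pain.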
If you design with 64k stacks then people _use_ that memory, and
soon you hear someone wanting 128k stacks to be "safe". That
way you end up with Windows. Note that 64k is 16 pages, and they
have to be _consecutive_. It doesn't take much fragmentation before
you can't get that many consecutive pages: you can easily have 3/4 of
your memory free, ready to be taken, yet still be unable to get 16
consecutive pages.

To see this, create a small kernel module that tries to allocate 64k
of consecutive memory when loaded (a sketch follows). Try loading it
after a few days of normal use, and see how often your server fails
to do it.
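
A minimal sketch of such a module, assuming a 2.6-era build
environment; the name frag_test and the printk messages are
illustrative, not from any real module:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/gfp.h>

/* 64k = 16 consecutive 4k pages = an order-4 allocation. */
#define FRAG_TEST_ORDER 4

static unsigned long buf;

static int __init frag_test_init(void)
{
	/* __GFP_NOWARN keeps a failed attempt from spamming the log. */
	buf = __get_free_pages(GFP_KERNEL | __GFP_NOWARN, FRAG_TEST_ORDER);
	if (!buf) {
		printk(KERN_INFO "frag_test: 64k contiguous alloc FAILED\n");
		return -ENOMEM;
	}
	printk(KERN_INFO "frag_test: 64k contiguous alloc succeeded\n");
	return 0;
}

static void __exit frag_test_exit(void)
{
	free_pages(buf, FRAG_TEST_ORDER);
}

module_init(frag_test_init);
module_exit(frag_test_exit);
MODULE_LICENSE(GPL_LICENSE_STRING);

(Use MODULE_LICENSE("GPL") there; written out to avoid the archive
mangling quotes.) If it loads fine right after boot but fails with
-ENOMEM after a few days of uptime, that is exactly the fragmentation
described above.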
Helge Hafting