Re: [rfc][patch] dynamic resizing dentry hash using RCU

> From: William Lee Irwin III <[email protected]>
> Date: Sat, 24 Feb 2007 14:56:31 -0800
>> just do it on a per-directory basis so you don't intermix children
>> of different parents in some boot-time-allocated global trainwreck
>> and you're home free.  Benchmarking is probably needed to gauge
>> which performs best.

On Sat, Feb 24, 2007 at 04:56:31PM -0800, David Miller wrote:
> The original dentry implementation in the kernel did things
> per-directory and it sucked.
> The problem you run into is that you end up with recursive algorithms
> all over the place, which chew up and overflow the kernel stack.
> So it would be a regression to go to a per-directory type
> lookup data structure for dentries.

Recursive code is clearly inadmissible in kernel environments; any
implementation would have to be coded iteratively. It's unclear to me
where the choice of data structure would demand recursive code, but
wherever it ostensibly does, an explicitly-allocated stack (or queue
or worklist of whatever sort) should be able to unravel it. There are
furthermore rather stringent bounds on balanced tree heights due to
address space limits, so whatever explicitly-allocated stacks may be
required for e.g. tree balancing algorithms are tightly bounded in
depth.
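
To make the unravelling concrete, here's a rough userspace sketch (not
a patch, untested against anything in the tree; the node layout, the
MAX_DEPTH bound, and all the names are invented purely for
illustration) of the usual recursive in-order walk done iteratively
with an explicitly-allocated, statically-sized stack:

#include <stdio.h>

/*
 * Generous bound: a balanced binary tree whose nodes fit in a 64-bit
 * address space can't get anywhere near this deep.
 */
#define MAX_DEPTH 128

struct node {
        unsigned long key;
        struct node *left, *right;
};

/* In-order walk with no recursion: the array replaces call frames. */
static void walk_inorder(struct node *root)
{
        struct node *stack[MAX_DEPTH];
        int top = 0;
        struct node *n = root;

        while (n || top > 0) {
                /* descend left, pushing the path as recursion would */
                while (n) {
                        stack[top++] = n;
                        n = n->left;
                }
                n = stack[--top];
                printf("%lu\n", n->key);
                n = n->right;
        }
}

int main(void)
{
        struct node a = { 1, NULL, NULL };
        struct node c = { 3, NULL, NULL };
        struct node b = { 2, &a, &c };

        walk_inorder(&b);       /* prints 1, 2, 3 */
        return 0;
}

The same trick handles teardown and rebalancing; the explicit stack is
only as deep as the tree is tall, which is exactly what the address
space bounds keep small.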

The usual objection is to lexicographically ordered balanced trees
such as B+ trees on the grounds that string comparison is cache-
intensive. Hash tries resolve that concern, albeit at the amortized
cost of expansion and contraction of what are effectively nodes in
a radix tree with a large radix as well as undoing path compression
where required (speaking of which, lib/radix-tree.c should really do
path compression instead of the ->depth hack). The quest, as I see it,
is for self-sizing data structures with as little restructuring and
direct key (string) comparison as possible. If the obvious candidates
don't incur significant overhead at the points where they do
restructure or compare keys, maybe they're good enough.
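
For concreteness, here's a rough userspace sketch of the lookup half
of such a hash trie (structure and names are invented here; expansion,
contraction, collisions, and all locking/RCU are omitted), showing the
hash bits driving the descent so that the name itself is compared
exactly once, at the leaf:

#include <stddef.h>
#include <string.h>

#define BITS_PER_LEVEL  4
#define FANOUT          (1u << BITS_PER_LEVEL)

struct ht_leaf {
        const char *name;
        void *value;
};

struct ht_node {
        int is_leaf;
        union {
                struct ht_node *slots[FANOUT];  /* interior: hash-indexed */
                struct ht_leaf leaf;            /* leaf: only home of the name */
        } u;
};

/* Any string hash will do; djb2 here purely for illustration. */
static unsigned long hash_name(const char *s)
{
        unsigned long h = 5381;

        while (*s)
                h = h * 33 + (unsigned char)*s++;
        return h;
}

/*
 * Descent consumes BITS_PER_LEVEL bits of the hash per level; the one
 * and only string comparison happens at the leaf.
 */
static void *ht_lookup(struct ht_node *root, const char *name)
{
        unsigned long hash = hash_name(name);
        struct ht_node *n = root;
        unsigned int shift = 0;

        while (n && !n->is_leaf) {
                n = n->u.slots[(hash >> shift) & (FANOUT - 1)];
                shift += BITS_PER_LEVEL;
        }
        if (n && !strcmp(n->u.leaf.name, name))
                return n->u.leaf.value;
        return NULL;
}

int main(void)
{
        static int value = 42;
        struct ht_node leaf = { 1, { .leaf = { "dentry", &value } } };
        struct ht_node root = { 0, { .slots = { NULL } } };

        root.u.slots[hash_name("dentry") & (FANOUT - 1)] = &leaf;
        return ht_lookup(&root, "dentry") == &value ? 0 : 1;
}

Insertion is where the interesting costs live: splitting a full leaf
means allocating an interior node and redistributing entries by the
next few hash bits, which is exactly the restructuring overhead I'd
want the benchmarks to measure.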

So I'm not sure what else to say as far as the per-directory lookup
structure commentary goes. There's no way I'd propose putting
recursive code in the kernel. I don't believe any of what I'm talking
about requires such. I'm aware of various drawbacks the algorithms
have, and have in mind that they should pass benchmark criteria, in
addition to reducing kernel memory consumption, to prove that those
drawbacks don't overwhelm the putative benefits. So I think I've got
all the
bases covered.

Am I off-base on any of this? Is there anything I missed?


-- wli

(P.S.: The only time I ever proposed putting something "recursive"
in the kernel, it used a flag to limit the recursion depth to two
stack frames. It was for the purpose of simplifying the reservation
of metadata in an allocator, and so not strictly necessary. A 
simplified metadata allocator could've easily been devised, albeit at
the cost of some code duplication.)
