* Andrew Morton <[email protected]> wrote:
> Well. The radix-tree approach's best-case is probably quite a lot
> worse than the list-based approach's best-case. It hits a lot more
> cachelines and involves a lot more code.
The list-based approach's best case is large continuous append writes:
no sorting overhead, and light data structures.
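(Rough illustration only, not the actual NFS client code: with a sorted
request list, a pure append workload always inserts at the tail, so the
"sort" degenerates into a constant-time tail append. The struct and
function names below are made up for the sketch.)

	#include <linux/list.h>

	struct demo_req {
		struct list_head list;
		unsigned long index;	/* page index this request covers */
	};

	static void demo_insert_sorted(struct list_head *head,
				       struct demo_req *req)
	{
		struct demo_req *pos;

		/*
		 * Walk backwards: for append writes the very first entry
		 * we look at already has a smaller index, so we insert
		 * right there and never scan the rest of the list.
		 */
		list_for_each_entry_reverse(pos, head, list) {
			if (pos->index < req->index) {
				list_add(&req->list, &pos->list);
				return;
			}
		}
		/* empty list, or req has the smallest index so far */
		list_add(&req->list, head);
	}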
I'd say this workload should not be that bad under the radix tree either
- the gang lookup stuffs a nice vector of 16 pages into an array.
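(Again only a sketch under assumptions, not the real code path: something
like the loop below, built on radix_tree_gang_lookup(), is what I have in
mind. The batch size of 16 matches the gang lookup vector mentioned above;
the nfs_scan_tree name and the idea that page->index is the key are
placeholders for illustration.)

	#include <linux/radix-tree.h>
	#include <linux/mm.h>

	#define NFS_SCAN_BATCH 16

	static unsigned long nfs_scan_tree(struct radix_tree_root *tree)
	{
		void *batch[NFS_SCAN_BATCH];
		unsigned long index = 0, found = 0;
		unsigned int i, n;

		do {
			/* one lookup fills up to 16 slots of the array */
			n = radix_tree_gang_lookup(tree, batch, index,
						   NFS_SCAN_BATCH);
			for (i = 0; i < n; i++) {
				struct page *page = batch[i];

				index = page->index + 1; /* resume after last hit */
				found++;
				/* ... flush/process the request for this page ... */
			}
		} while (n == NFS_SCAN_BATCH);

		return found;
	}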
We definitely cannot conclude anything from the observation that a
_single_ page took 1.9 msecs in Lee's previous measurement, while 7700
pages now take 6 msecs to process.
> But of course the radix-tree's worst-case will be far better than
> list's.
The generic VM/pagecache has proven that the radix tree wins hands down
for a lot more workloads than just the worst case.
> And presumably that list-based code rarely hits the worst-case, else
> it would have been changed by now.
That was my other point in a previous mail: most write benchmarks do
continuous append writes, and CPU overhead easily gets lost in network
latency.
Also, considering that a good portion of the NFS client's code is still
running under the BKL, one would assume that if the BKL hurt performance
it would have been changed by now? ;-)
Ingo