Ingo Molnar wrote:
> * Nick Piggin <[email protected]> wrote:
>
> > I said exactly the same thing last time this came up. I would love to
> > remove code if its functionality can be adequately replaced by
> > existing code, but I think your reasons for removing SLOB aren't that
> > good, and just handwaving away the significant memory savings doesn't
> > work.
>
> yeah. Also, the decision here is pretty easy: the behavior of the
> allocator is not really visible to applications. So this isn't like
> having a parallel IO scheduler or a parallel process scheduler (which
> cause us problems by fragmenting the application space) - so the
> long-term cost to us kernel maintainers should be relatively low.

Yep.

> A year ago the -rt kernel defaulted to SLOB for a few releases, and
> barring some initial scalability issues (which were solved in -rt) it
> worked pretty well on generic PCs, so I don't buy the 'it doesn't
> work' argument either.
>
> > It's actually recently been made to work on SMP; it is much more
> > scalable to large memories, and some initial NUMA work is happening
> > that some embedded guys are interested in, all without increasing
> > the static footprint too much, and it has actually decreased the
> > dynamic footprint too.
>
> cool. I was referring to something else: people were running -rt on
> their beefy desktop boxes with several gigs of RAM, and they complained
> about the slowdown caused by SLOB's linear list walking.

That is what I meant by scalable to large memories. It is not perfect,
but it is much better now. I noticed huge slowdowns too when test booting
the slob RCU patch on my 4GB desktop, so I did a few things to improve
freelist walking as well (the patches are in -mm, prefixed with slob-).
Afterwards, performance seems to be fairly good (obviously not as good
as SLAB or SLUB on such a configuration, but definitely usable, and the
desktop was not noticeably slower).
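
For anyone following along who hasn't read slob.c: the slowdown being
discussed comes from first-fit allocation over a single linear freelist.
Below is a minimal illustrative sketch of that pattern; the names
(struct free_unit, pool_alloc) are made up for the example and are not
the actual slob.c identifiers.

/*
 * Illustrative sketch (not the real slob.c code): first-fit
 * allocation over one linear freelist, which is what makes the
 * allocation cost grow with the number of free fragments.
 */
#include <stddef.h>

struct free_unit {
	size_t size;			/* bytes in this free block */
	struct free_unit *next;		/* next free block */
};

static struct free_unit *freelist;

static void *pool_alloc(size_t size)
{
	struct free_unit **prev = &freelist;
	struct free_unit *cur;

	/*
	 * This walk is the O(n) cost under discussion: with gigabytes
	 * of RAM the freelist can get very long.
	 */
	for (cur = freelist; cur; prev = &cur->next, cur = cur->next) {
		if (cur->size < size)
			continue;
		if (cur->size - size < sizeof(struct free_unit)) {
			/* close enough to an exact fit: unlink it all */
			*prev = cur->next;
		} else {
			/* split: leave the tail of the block on the list */
			struct free_unit *rest =
				(struct free_unit *)((char *)cur + size);
			rest->size = cur->size - size;
			rest->next = cur->next;
			*prev = rest;
		}
		return cur;
	}
	return NULL;	/* a real allocator would grab a fresh page here */
}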

> Steve Rostedt did two nice changes to fix those scalability problems.
> I've attached Steve's two patches. With these in place, SLOB was very
> usable for large systems as well, with no measurable overhead.
> (obviously the lack of per-cpu caching can still be measured, but it's
> a lot less of a problem in practice than the linear list walking was.)

Thanks for sending those. One is actually obsolete because we removed
the bigblock list completely; however, I had not seen the other one.
Such an approach could be used. OTOH, having all allocations come from
the same pool does have its advantages in terms of memory usage.

I don't think the next step to take with SLOB has quite been decided,
but I have an idea that per-cpu freelists (with the other lists used as
a fallback) would go a long way toward improving the SMP scalability,
CPU cache hotness, and long list walking issues all at once; a rough
sketch of the idea follows.
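
This is just a sketch of what that could look like, not code from -mm;
all of the names here (cpu_list, global_list, pcpu_alloc) are
hypothetical, and real kernel code would use the per-cpu API, disable
preemption on the fast path, and lock the shared list. Block splitting
is also left out for brevity.

/*
 * Hypothetical sketch: per-cpu freelists with a shared fallback pool.
 */
#include <stddef.h>

#define NR_CPUS 8

struct free_unit {
	size_t size;
	struct free_unit *next;
};

static struct free_unit *cpu_list[NR_CPUS];	/* short, cache-hot lists */
static struct free_unit *global_list;		/* shared fallback pool */

/* Pop the first block of at least 'size' bytes off a list. */
static struct free_unit *pop_fit(struct free_unit **list, size_t size)
{
	struct free_unit **prev, *cur;

	for (prev = list, cur = *list; cur;
	     prev = &cur->next, cur = cur->next) {
		if (cur->size >= size) {
			*prev = cur->next;
			return cur;
		}
	}
	return NULL;
}

static void *pcpu_alloc(int cpu, size_t size)
{
	struct free_unit *block;

	/*
	 * Fast path: the per-cpu list is short, so the walk stays
	 * bounded, and the memory is likely hot in this CPU's cache.
	 */
	block = pop_fit(&cpu_list[cpu], size);
	if (block)
		return block;

	/* Slow path: fall back to the shared pool. */
	return pop_fit(&global_list, size);
}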

However, I like the fact that there is no need for a big rush to
improve it, so patches and ideas can be brewed up slowly :)

Thanks,
Nick
--
SUSE Labs, Novell Inc.