* Andy Nelson <[email protected]> wrote:
> The problem is a different configuration of particles, about 2 times
> bigger (7 million particles) than the one in comp.arch (3 million, I
> think). I would estimate that the data set in this test spans roughly
> 2-2.5GB.
>
> Here are the results:
>
> cpus      4k pages     16m pages
>    1      4888.74s     2399.36s
>    2      2447.68s     1202.71s
>    4      1225.98s      617.23s
>    6       790.05s      418.46s
>    8       592.26s      310.03s
>   12       398.46s      210.62s
>   16       296.19s      161.96s
interesting, and thanks for the numbers. Even if hugetlbs were showing a
'mere' 5% improvement, a 5% _user-space improvement_ would still be
considerable and well worth achieving, if it can be done cheaply.
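(for reference, the table above works out to roughly a 2x speedup from
the 16m pages at every cpu count - e.g. 4888.74s/2399.36s ~= 2.04 on 1
cpu and 296.19s/161.96s ~= 1.83 on 16 cpus - so the win in this workload
is far larger than 5%.)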
the 'separate hugetlb zone' solution is cheap and simple, and i believe
it should cover your need for mixed hugetlb and smallpage workloads.
it would work like this: unlike memory reserved via the current
hugepages=<nr> boot parameter, this zone would be usable for other
(4K sized) allocations too. If an app requests a hugepage then we have
the chance to allocate it from the hugetlb zone, in a guaranteed way
[up to the point where the whole zone consists of hugepages only].
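for concreteness, here is a minimal sketch of how an application
requests hugepage-backed memory today, via a file on a mounted
hugetlbfs (the /mnt/huge mount point and file name are just
placeholders); the zone proposal would not change this interface, only
where the kernel takes the huge pages from:

/*
 * Minimal sketch: map 256 MB backed by huge pages through hugetlbfs.
 * Assumes hugetlbfs is mounted, e.g.:
 *     mount -t hugetlbfs none /mnt/huge
 * and that enough huge pages have been reserved (hugepages=<nr> at
 * boot, or echo <nr> > /proc/sys/vm/nr_hugepages at runtime).
 * The mapping length must be a multiple of the huge page size.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LENGTH	(256UL * 1024 * 1024)

int main(void)
{
	int fd = open("/mnt/huge/example", O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open /mnt/huge/example");
		return 1;
	}

	char *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* touch the memory: each fault is filled with a huge page */
	memset(addr, 0, LENGTH);
	printf("mapped %lu MB at %p using huge pages\n",
	       LENGTH >> 20, (void *)addr);

	munmap(addr, LENGTH);
	close(fd);
	unlink("/mnt/huge/example");
	return 0;
}

(SysV shared memory with the SHM_HUGETLB flag works the same way for
apps that prefer shmget/shmat.)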
the architectural appeal of this solution is that no additional
"fragmentation prevention" has to be done on this zone, because we only
allow content into it that is "easy" to flush - this means there is no
complexity drag on the generic kernel VM.
can you think of any reason why the boot-time-configured hugetlb zone
would be inadequate for your needs?
Ingo