On Tue, Feb 21, 2006 at 02:30:00PM +1100, Nick Piggin wrote:
> Ravikiran G Thirumalai wrote:
> >Following change places each element of the futex_queues hashtable on a
> >different cacheline. Spinlocks of adjacent hash buckets lie on the same
> >cacheline otherwise.
> >
>
> It does not make sense to add swaths of unused memory into a hashtable for
> this purpose, does it?
I don't know that having two (or more) spinlocks on the same cacheline is a good
idea. Right now, with a 128 B cacheline, we have 10 spinlocks sharing a single
cacheline here! Things get worse if two futexes from different nodes hash onto
adjacent, or even nearly adjacent, hash buckets.
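For reference, the change being discussed essentially pads each hash bucket out
to its own cacheline. A minimal sketch of that idea, assuming the 2.6-era
futex_hash_bucket layout (the struct members and FUTEX_HASHBITS here are from
memory, not copied from the patch):

/*
 * Minimal sketch (not the actual patch): align each hash bucket,
 * and hence each spinlock, to its own cacheline so that adjacent
 * buckets no longer false-share.
 */
struct futex_hash_bucket {
	spinlock_t		lock;
	struct list_head	chain;
} ____cacheline_aligned_in_smp;

static struct futex_hash_bucket futex_queues[1 << FUTEX_HASHBITS];

With a 128 B cacheline this costs one full cacheline per bucket, which is
exactly the memory bloat being objected to above.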
>
> For a minimal, naive solution you just increase the size of the hash table.
> This will (given a decent hash function) provide the same reduction in
> cacheline contention, while also reducing collisions.
Given a decent hash function, yes; but I am not sure the current hashing
function is smart enough as of now. The hash is not a function of the node id,
and we have instrumentation results which show that hashing on NUMA is not good
yet: there are collisions from other nodes onto the same hash bucket, and
nearby buckets have high hit rates too.
I think some sort of NUMA-friendly hashing, where futexes from the same node
hash onto a node-local hash table, would be a decent solution here.
As I mentioned earlier, we are working on that, and we can then allocate the
spinlocks from node-local memory and avoid this bloat.
We are hoping to use this patch as a stop-gap fix until we get there; a rough
sketch of the direction we have in mind is below.
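Illustrative only: the per-node array, the futex_node_init() and
hash_futex_on_node() names, and the reuse of the existing jhash2() key hash are
assumptions, not code from our tree; spinlock/list initialisation and error
handling are omitted.

/*
 * Rough sketch of NUMA-friendly futex hashing: one hash table per
 * node, allocated from node-local memory, with the bucket chosen by
 * hashing the futex key within its home node.
 */
static struct futex_hash_bucket *node_futex_queues[MAX_NUMNODES];

static int __init futex_node_init(void)
{
	int nid;

	for_each_online_node(nid)
		node_futex_queues[nid] =
			kmalloc_node((1 << FUTEX_HASHBITS) *
				     sizeof(struct futex_hash_bucket),
				     GFP_KERNEL, nid);
	return 0;
}

static struct futex_hash_bucket *hash_futex_on_node(union futex_key *key,
						    int nid)
{
	u32 hash = jhash2((u32 *)&key->both.word,
			  (sizeof(key->both.word) + sizeof(key->both.ptr)) / 4,
			  key->both.offset);

	return &node_futex_queues[nid][hash & ((1 << FUTEX_HASHBITS) - 1)];
}

That keeps the spinlocks (and the bucket chains) in node-local memory, so the
per-bucket padding in the current patch would no longer be needed.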
Thanks,
Kiran