Re: light weight counters: race free through local_t?



Let's put it another way:
Do the statistics need to be absolutely precise?
I guess they do not.
By the time you can display them, they have already been changed.

Let's take an example:

zone_statistics(struct zonelist *zonelist, struct zone *z)
{
	pg_data_t *pg = z->zone_pgdat;
	pg_data_t *orig = zonelist->zones[0]->zone_pgdat;
	struct per_cpu_pageset *p;
	int cpu;

//	local_irq_save(flags);		// No IRQ lock out
	cpu = smp_processor_id();	// Can become another CPU
	p = &z->pageset[cpu];		// Can count for someone else
	if (pg == orig) {
		stat_incr(&p->numa_hit);	// Unsafe
	} else {
		...
	}
//	local_irq_restore(flags);
}

where "stat_incr()" is an arch-dependent and possibly unsafe routine.

For IA64:

// Unsafe statistics
static inline void stat_incr(int *addr)
{
	int tmp;

	// ld4.bias obtains the cache line exclusively right away;
	// the .nt1 hint keeps it out of L1
	asm volatile ("ld4.bias.nt1 %0=[%1]" : "=r"(tmp) : "r" (addr));
	tmp++;
	asm volatile ("st4 [%1] = %0" :: "r"(tmp), "r"(addr) : "memory");
}

It takes 10 clock cycles.
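For contrast, here is a sketch of the safe counterpart: what local_inc() on a local_t boils down to on most architectures, namely a single atomic read-modify-write. This is a userspace C11 rendering of the idea, not the kernel API itself:

```c
#include <stdatomic.h>

static inline void stat_incr_safe(atomic_int *addr)
{
	// One atomic add: there is no window between the load and the
	// store, so preemption or migration cannot lose the update,
	// unlike the open-coded ld4/st4 pair above.
	atomic_fetch_add_explicit(addr, 1, memory_order_relaxed);
}
```

Relaxed ordering is enough here, since a statistics counter needs no synchronization with other memory accesses, only an exact count.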

