Re: [patch 3/4] net: Percpufy frequently used variables -- proto.sockets_allocated

Ravikiran G Thirumalai <[email protected]> wrote:
>
> On Fri, Jan 27, 2006 at 12:16:02PM -0800, Andrew Morton wrote:
> > Ravikiran G Thirumalai <[email protected]> wrote:
> > >
> > > which can be assumed to be infrequent.
> > > At sk_stream_mem_schedule(), read_sockets_allocated() is invoked only
> > > under certain conditions, under memory pressure -- on a large CPU count
> > > machine you'd have large memory, and I don't think read_sockets_allocated
> > > would get called often.  It did not, at least on our 8cpu/16G box.  So this
> > > should be OK I think.
> > 
> > That being said, the percpu_counters aren't a terribly successful concept
> > and probably do need a revisit due to the high inaccuracy at high CPU
> > counts.  It might be better to do some generic version of vm_acct_memory()
> > instead.
> 
> AFAICS vm_acct_memory is no better.  The deviation on large cpu counts is the
> same as with percpu_counters -- (NR_CPUS * NR_CPUS * 2) ...

I suppose so.  Except vm_acct_memory() has

	#define ACCT_THRESHOLD  max(16, NR_CPUS * 2)

But if we were to perform similar tuning to percpu_counter, yes, they're
pretty similar.
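
(To spell out the arithmetic: with a per-cpu batch of b, a read of the global
count can be stale by up to NR_CPUS * b.  ACCT_THRESHOLD above is NR_CPUS * 2
once you're past 8 cpus, and IIRC percpu_counter's FBC_BATCH is of the same
order, hence the ~NR_CPUS * NR_CPUS * 2 worst case Ravikiran quotes -- around
2048 on a 32-way.)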

Oh, and because vm_acct_memory() is counting a singleton object, it can use
DEFINE_PER_CPU rather than alloc_percpu(), so it saves on a bit of kmalloc
overhead.
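
For reference, the whole batching scheme there amounts to something like this
(paraphrasing mm/swap.c from memory, so treat it as a sketch):

	static DEFINE_PER_CPU(long, committed_space) = 0;	/* per-cpu delta */
	static atomic_t vm_committed_space = ATOMIC_INIT(0);	/* global, approximate */

	void vm_acct_memory(long pages)
	{
		long *local;

		preempt_disable();
		local = &__get_cpu_var(committed_space);
		*local += pages;
		/*
		 * Fold the local delta into the global atomic once it drifts
		 * past the threshold; this bounds the global error to
		 * NR_CPUS * ACCT_THRESHOLD while keeping the fast path local.
		 */
		if (*local > ACCT_THRESHOLD || *local < -ACCT_THRESHOLD) {
			atomic_add(*local, &vm_committed_space);
			*local = 0;
		}
		preempt_enable();
	}

ie: the counter itself is a compile-time per-cpu variable -- no alloc_percpu()
and no extra dereference through a kmalloced array.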


> > 
> > If the benchmarks say that we need to.  If we cannot observe any problems
> > in testing of existing code and if we can't demonstrate any benefit from
> > the patched code then one option is to go off and do something else ;)
> 
> We first tried plain per-CPU counters for memory_allocated, found that reads
> of memory_allocated were causing cacheline transfers, and then
> switched over to batching.  So batching reads is useful.  To avoid
> the inaccuracy, we could maybe change percpu_counter_init to:
> 
> 	void percpu_counter_init(struct percpu_counter *fbc, int maxdev)
> 
> The percpu batching limit would then be maxdev / num_possible_cpus().  One
> would use batching counters only when both reads and writes are frequent.
> With this scheme, we would end up fetching cachelines from other cpus for
> reads often only at large cpu counts, which is no worse than the global
> counter alternative, while still being beneficial on smaller machines and
> keeping the deviation within a pre-set bound.
> 
> Comments?

Sounds sane.
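
Something along these lines, presumably (untested sketch -- the maxdev
argument and the per-counter batch field are the new parts; the rest just
mirrors what lib/percpu_counter.c does today with its compile-time FBC_BATCH):

	struct percpu_counter {
		spinlock_t lock;
		long count;		/* approximate global value */
		long batch;		/* per-cpu slop allowed before folding */
		long *counters;
	};

	static inline void percpu_counter_init(struct percpu_counter *fbc, int maxdev)
	{
		spin_lock_init(&fbc->lock);
		fbc->count = 0;
		/* spread the allowed deviation across the possible cpus */
		fbc->batch = maxdev / num_possible_cpus();
		if (!fbc->batch)
			fbc->batch = 1;
		fbc->counters = alloc_percpu(long);
	}

	void percpu_counter_mod(struct percpu_counter *fbc, long amount)
	{
		long *pcount = per_cpu_ptr(fbc->counters, get_cpu());

		*pcount += amount;
		/*
		 * Fold into the global count once the local delta exceeds
		 * this counter's share of the allowed deviation.
		 */
		if (*pcount >= fbc->batch || *pcount <= -fbc->batch) {
			spin_lock(&fbc->lock);
			fbc->count += *pcount;
			*pcount = 0;
			spin_unlock(&fbc->lock);
		}
		put_cpu();
	}

Readers which can live with the slop then just look at fbc->count and never
touch the other cpus' cachelines; an exact read would still have to sum the
per-cpu deltas.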
