Hi Bharata,
On Tue, Oct 04, 2005 at 07:06:35PM +0530, Bharata B Rao wrote:
> Marcelo,
>
> Here's my next attempt at breaking the "slabs_scanned" field from
> /proc/vmstat into meaningful per-cache statistics. The statistics
> counters are now percpu. [One remaining issue: mbcache consists of
> more than one cache, all sharing a common shrinker routine, so I am
> displaying the collective shrinker stats against each of them in
> /proc/slabinfo, which leads to some duplication.]
Looks good to me! IMO it should be a candidate for -mm/mainline.
Nothing useful to suggest on the mbcache issue... sorry.
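Just to make sure I am reading the approach right, here is a minimal
sketch of what I take the percpu counters to be (shrinker_stats,
nr_requested, nr_freed and nr_to_scan are my names, not necessarily
the patch's):

	/* Sketch only: one pair of counters per shrinker, allocated
	 * per-cpu so the reclaim path can bump them without atomics
	 * or locks. */
	struct shrinker_stats {
		unsigned long nr_requested;	/* objects the VM asked to scan */
		unsigned long nr_freed;		/* objects actually freed */
	};

	/* at shrinker registration time */
	struct shrinker_stats *stats = alloc_percpu(struct shrinker_stats);

	/* hot path: bump the local cpu's counter, preemption disabled */
	per_cpu_ptr(stats, get_cpu())->nr_requested += nr_to_scan;
	put_cpu();

	/* read side (/proc/slabinfo): sum over the cpus */
	unsigned long total_requested = 0;
	int cpu;

	for_each_online_cpu(cpu)
		total_requested += per_cpu_ptr(stats, cpu)->nr_requested;

If so, the read side only costs a sum over the cpus per
/proc/slabinfo line, which seems fine for a stats file.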
> With this patch (and my earlier dcache stats patch) I observed some
> interesting results with the following test scenario on an 8-CPU P3 box:
>
> - Ran an application which consumes 40% of the total memory.
> - Ran dbench on tmpfs with 128 clients twice (serially).
> - Ran a find on an ext3 partition having ~9.5 million entries (files and
>   directories included)
>
> At the end of this run, I have the following results:
>
> [root@llm09 bharata]# cat /proc/meminfo
> MemTotal: 3872528 kB
> MemFree: 1420940 kB
> Buffers: 714068 kB
> Cached: 21536 kB
> SwapCached: 2264 kB
> Active: 1672680 kB
> Inactive: 637460 kB
> HighTotal: 3014616 kB
> HighFree: 1411740 kB
> LowTotal: 857912 kB
> LowFree: 9200 kB
> SwapTotal: 2096472 kB
> SwapFree: 2051408 kB
> Dirty: 172 kB
> Writeback: 0 kB
> Mapped: 1583680 kB
> Slab: 119564 kB
> CommitLimit: 4032736 kB
> Committed_AS: 1647260 kB
> PageTables: 2248 kB
> VmallocTotal: 114680 kB
> VmallocUsed: 1264 kB
> VmallocChunk: 113384 kB
> nr_dentries/page nr_pages nr_inuse
> 0 0 0
> 1 5 2
> 2 12 4
> 3 26 9
> 4 46 18
> 5 76 40
> 6 82 47
> 7 91 59
> 8 122 93
> 9 114 102
> 10 142 136
> 11 138 185
> 12 118 164
> 13 128 206
> 14 126 208
> 15 120 219
> 16 136 261
> 17 159 315
> 18 145 311
> 19 179 379
> 20 192 407
> 21 256 631
> 22 286 741
> 23 316 816
> 24 342 934
> 25 381 1177
> 26 664 2813
> 27 0 0
> 28 0 0
> 29 0 0
> Total: 4402 10277
> dcache lru: total 75369 inuse 3599
>
> [Here,
> nr_dentries/page - Number of dentries per page
> nr_pages - Number of pages with given number of dentries
> nr_inuse - Number of inuse dentries in those pages.
> E.g.: from the above data, there are 26 pages with 3 dentries each,
> and of the 78 total dentries on those 26 pages, 9 are in use.]
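(As I understand the table, the accounting per slab page amounts to
something like the sketch below; account_page and
MAX_DENTRIES_PER_PAGE are made-up names, and I am assuming "in use"
means d_count > 0:)

	struct page_hist {
		unsigned long nr_pages;	/* pages holding exactly 'nr' dentries */
		unsigned long nr_inuse;	/* in-use dentries across those pages */
	};

	static struct page_hist hist[MAX_DENTRIES_PER_PAGE + 1];

	/* Called once per dentry slab page with the dentries it holds. */
	static void account_page(struct dentry **dentry, int nr)
	{
		int i, inuse = 0;

		for (i = 0; i < nr; i++)
			if (atomic_read(&dentry[i]->d_count) > 0)
				inuse++;

		hist[nr].nr_pages++;
		hist[nr].nr_inuse += inuse;
	}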
>
> [root@llm09 bharata]# grep shrinker /proc/slabinfo
> # name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail> : shrinker stat <nr requested> <nr freed>
> ext3_xattr 0 0 48 78 1 : tunables 120 60 8 : slabdata 0 0 0 : shrinker stat 0 0
> dquot 0 0 160 24 1 : tunables 120 60 8 : slabdata 0 0 0 : shrinker stat 0 0
> inode_cache 1301 1390 400 10 1 : tunables 54 27 8 : slabdata 139 139 0 : shrinker stat 682752 681900
> dentry_cache 82110 114452 152 26 1 : tunables 120 60 8 : slabdata 4402 4402 0 : shrinker stat 1557760 760100
>
> [root@llm09 bharata]# grep slabs_scanned /proc/vmstat
> slabs_scanned 2240512
>
> [root@llm09 bharata]# cat /proc/sys/fs/dentry-state
> 82046 75369 45 0 3599 0
> [The fields of the dentry-state output, in order, are:
> total dentries in the dentry hash list, total dentries in the lru list,
> age limit, want_pages, inuse dentries in the lru list, dummy]
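(For reference, those fields correspond to struct dentry_stat_t from
include/linux/dcache.h; presumably the patch reports the in-use lru
count in one of the dummy slots:)

	struct dentry_stat_t {
		int nr_dentry;		/* total dentries allocated */
		int nr_unused;		/* dentries sitting on the lru list */
		int age_limit;		/* age in seconds */
		int want_pages;		/* pages requested by system */
		int dummy[2];
	};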
>
> So we can see that under low memory pressure, even though the
> shrinker runs on the dcache repeatedly, not many dentries actually
> get freed, and the dcache lru list still holds a huge number of
> unused dentries.
The success/attempt ratio is about 1/2 (for dentry_cache above,
760100 freed out of 1557760 requested, i.e. ~0.49), which seems alright?