FC4 hang/interpret slabtop

I have an FC4 machine which hangs frequently.

It does a *lot* of logging, receiving syslog traffic from remote hosts.  It
can hang at unpredictable times, but it usually becomes unresponsive around
the time logrotate tries to rotate the syslogs.  My suspicion is that
something is wrong somewhere between the kernel and syslogd.

I have tried looking at slabtop to see if the kernel is running out of room
in one of its slab caches.  I still think this is the case, but I can't find
any authoritative source on increasing the cache sizes to compensate.

Here is a view of my slabtop output sorted by percentage used.  Things don't
look right to me: there seem to be a lot of caches at 100%, but my efforts
to increase them by writing new values into /proc/slabinfo have been either
wrong or ineffective.  Any thoughts?
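For reference, this is roughly the kind of write I was attempting.  As far
as I can tell from the 2.6 kernel docs, a write to /proc/slabinfo takes the
form "<cache-name> <limit> <batchcount> <sharedfactor>" and only adjusts the
per-CPU object caches, not a hard cap on the cache itself -- the cache name
and the numbers below are guesses on my part:

```shell
# Format of a /proc/slabinfo tunables write (2.6 kernels):
#   <cache-name> <limit> <batchcount> <sharedfactor>
# These tune the per-CPU object caches only; the values here are
# guesses of mine and may well be wrong for this workload.
cache=size-4096; limit=256; batch=32; shared=8
line="$cache $limit $batch $shared"
echo "$line"                      # the string I would write
# echo "$line" > /proc/slabinfo   # (run as root; commented out here)
```

If the tunables line is all this interface accepts, that would explain why
my writes seemed ineffective at growing the caches.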


[root@syslogger ~]# slabtop
 Active / Total Objects (% used)    : 53671 / 74663 (71.9%)
 Active / Total Slabs (% used)      : 3335 / 3337 (99.9%)
 Active / Total Caches (% used)     : 85 / 123 (69.1%)
 Active / Total Size (% used)       : 9067.80K / 13123.39K (69.1%)
 Minimum / Average / Maximum Object : 0.01K / 0.17K / 128.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
   150    150 100%    0.12K      5       30        20K kmem_cache
  3164   3164 100%    0.03K     28      113       112K size-32
   195    195 100%    0.25K     13       15        52K size-256
   160    160 100%    1.00K     40        4       160K size-1024
   176    176 100%    2.00K     88        2       352K size-2048
   161    161 100%    4.00K    161        1       644K size-4096
     1      1 100%    8.00K      1        1         8K size-8192
     1      1 100%   16.00K      1        1        16K size-16384
     1      1 100%   32.00K      1        1        32K size-32768
     1      1 100%   64.00K      1        1        64K size-65536
    17     17 100%    4.00K     17        1        68K names_cache
    26     26 100%    0.14K      1       26         4K sigqueue
    77     77 100%    0.50K     11        7        44K sock_inode_cache
   260    260 100%    3.00K    130        2      1040K biovec-(256)
    14     14 100%    0.56K      2        7         8K UDP
   570    570 100%    0.25K     38       15       152K ip_dst_cache
    49     49 100%    0.50K      7        7        28K UNIX
    32     32 100%    0.50K      4        8        16K sgpool-32
    32     32 100%    1.00K      8        4        32K sgpool-64
    32     32 100%    2.00K     16        2        64K sgpool-128
    22     22 100%    0.34K      2       11         8K scsi_cmd_cache
     8      8 100%    2.00K      4        2        16K rpc_buffers
   265    264  99%    1.50K     53        5       424K biovec-128
   434    431  99%    0.53K     62        7       248K shmem_inode_cache
    59     58  98%    4.00K     59        1       236K pgd
   275    272  98%    0.75K     55        5       220K biovec-64
  2944   2884  97%    0.04K     32       92       128K sysfs_dir_cache
   920    896  97%    0.04K     10       92        40K acpi_operand
   280    272  97%    0.19K     14       20        56K biovec-16
  1200   1140  95%    0.12K     40       30       160K size-128
    84     79  94%    1.31K     28        3       112K task_struct
   450    423  94%    0.44K     50        9       200K proc_inode_cache
    81     76  93%    1.28K     27        3       108K sighand_cache
   295    272  92%    0.06K      5       59        20K biovec-4
  2508   2268  90%    0.09K     57       44       228K vm_area_struct



