At 8:30 AM -0400 4/11/07, Mark Haney wrote:
>Les Mikesell wrote:
>> Alan Cox wrote:
>>>> I started a du on my 250 gig backuppc partition when I read the first
>>>> message in this thread. It hasn't finished yet. It doesn't have a
>>>> single very large directory - just very many filenames. I thought
>>>> this thread was about du taking a long time to complete, not just one
>>>> kind of layout that exhibits the problem.
>>>
>>> OK, that is interesting, and since it's not a single very large
>>> directory it ought not to be happening.
>>>
>>> Your backuppc partition is on what kind of media and file system?
>>
>> It's a reiserfs created several years ago on RAID1/IDE drives. At the
>> time it seemed much faster at creating/deleting files than the other
>> filesystem choices. The status for the hashed/pooled filename directory
>> says:
>> Pool is 162.22GB comprising 2576727 files and 4369 directories
>> But there are many more hardlinks into this pool representing the tree
>> structure of each of the machines for each backup run.
>>
>> If the du ever finishes I'll get a count of actual filenames.
>>
>
>You know, I have a lot of hardlinks in my filesystem as well. I wonder
>if that's some of the problem on my end, too.

I see that du has a --count-links option to "count sizes many times",
which I suspect means that by default du tracks hard links and counts
each file's storage only once. I haven't looked at the source code. If
du is tracking a lot of hard links, it might be using a lot of memory --
I'd think it would have to store the inode number of each multi-linked
file it has already seen. I don't know what it does with hard-linked
directories (but don't use hard-linked directories!).

-- 
____________________________________________________________________
TonyN.:'                       <mailto:tonynelson@xxxxxxxxxxxxxxxxx>
      '                              <http://www.georgeanelson.com/>
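
P.S. To make the guess above concrete, here's a toy du in Python. It's
only a sketch of the bookkeeping I have in mind -- deduplicating by
(device, inode) pair for files with more than one link -- not the actual
GNU coreutils implementation, and it only counts regular files:

#!/usr/bin/env python3
"""Toy du: count each hard-linked file's storage once (du's default)."""
import os
import sys

def disk_usage(root):
    seen = set()   # (st_dev, st_ino) of multi-linked files already counted
    total = 0      # bytes actually allocated
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)   # lstat: don't follow symlinks
            except OSError:
                continue
            if st.st_nlink > 1:
                key = (st.st_dev, st.st_ino)
                if key in seen:
                    continue          # hard link to a file already counted
                seen.add(key)         # this set is the memory cost in question
            total += st.st_blocks * 512   # st_blocks is in 512-byte units
    return total

if __name__ == '__main__':
    print(disk_usage(sys.argv[1] if len(sys.argv) > 1 else '.'))

With a pool of ~2.6 million files, nearly all of them multi-linked, that
"seen" set has to hold millions of entries before the traversal
finishes, which is exactly the kind of memory growth I was speculating
about.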