On Tuesday, Apr 10th 2007 at 17:26 +0100, quoth Alan Cox:

>> this is exactly my problem I believe. I have a 4TB RAID SAN and the
>> problem directory holds well over a million small files. Even for a
>> directory under that directory, du takes almost 40 minutes to total up 74GB.
>
> The directory code in ext2 and ext3 isn't designed for a million files in
> a single directory. It's a bizarre corner case. If you split the files
> into subdirectories (e.g. by a hash) you'll get far saner numbers.

From way back in the cobwebs of history, I seem to remember something
magic about the number 188 (I could be wrong about the number). At that
point, if a directory crossed over to having that number of files, the
directory would grow from 1 to 2 blocks. Of course, the number of blocks
would never go down, even if you deleted the excess files.

I'm curious whether that's still true under ext{2,3}, and how many files
it takes to cause blocks to be added. Anyone?

--
Time flies like the wind. Fruit flies like a banana. Stranger things have  .0.
happened but none stranger than this. Does your driver's license say Organ ..0
Donor? Black holes are where God divided by zero. Listen to me! We are all 000
individuals! What if this weren't a hypothetical question?
steveo at syslang.net
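
[Editor's note: Alan's suggestion of splitting the files into subdirectories by a hash could be sketched roughly as below. This is a hypothetical illustration, not code from the thread; the names (`NBUCKETS`, `bucket_for`, `bucketed_path`) are invented for the example.]

```python
import hashlib
import os

# One level of 256 buckets (00/ .. ff/) keeps each subdirectory around
# ~4000 entries for a million files, instead of one huge directory.
NBUCKETS = 256

def bucket_for(name: str) -> str:
    """Return the subdirectory a given filename should live in,
    derived from the first byte of a hash of the name."""
    h = hashlib.md5(name.encode()).hexdigest()
    return h[:2]  # two hex digits -> 256 buckets

def bucketed_path(root: str, name: str) -> str:
    """Map a flat filename to its hashed subdirectory path."""
    return os.path.join(root, bucket_for(name), name)
```

Because the bucket is a pure function of the filename, lookups stay O(1): any program that knows the name can recompute which subdirectory to open, with no index needed.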