Re: ext3 filesystem performance issues

On Wed, 12 Sep 2007, aragonx@xxxxxxxxxx wrote:

I'm wondering at what point ext3 starts having issues with number of files
in a directory.

For instance, will any of the utilities fail (ls, mv, chown etc) if you
have more than x files in a directory?

At what point do things really start slowing down?

I was told by a coworker that all UNIX varieties have to do an ordered
list search when they have to perform any operations on a directory.  They
also stated that if there are more than 100k files in a directory, these
tools would fail.

This seems like a low number to me but I was looking for some expert
analysis.  :)

Thanks
Will


I don't think the tools themselves fail; rather, the shell hits the kernel's limit on command-line length when it expands a glob over that many files. Take for example this test log:

$ time du -sh
9.3G    .

real    0m5.609s
user    0m0.024s
sys     0m0.544s
$ time ls *|wc
bash: /bin/ls: Argument list too long
      0       0       0

real    0m0.578s
user    0m0.527s
sys     0m0.051s
$ time ls|wc
  81000   81000 1376689

real    0m0.652s
user    0m0.541s
sys     0m0.066s
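
The "Argument list too long" error above comes from the kernel's limit on the total size of arguments passed to exec() (ARG_MAX), not from ls or from ext3 itself; a plain "ls" with no glob never hits it, because the shell isn't expanding 81000 names onto one command line. A rough sketch of ways to check the limit and to work in such a directory without tripping it (the user name below is just a placeholder):

$ getconf ARG_MAX                        # kernel limit on exec() argument size, in bytes
$ printf '%s\n' * | wc -l                # printf is a bash builtin, so no exec() limit applies
$ find . -maxdepth 1 -type f -exec chown someuser {} +   # batches the files instead of "chown someuser *"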

Note also that the filesystem features for the mounted ext3 volume are:

has_journal resize_inode dir_index filetype needs_recovery sparse_super large_file
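
The dir_index feature is the one that matters for large directories: with it, ext3 uses hashed B-tree (HTree) indexes for directory lookups instead of a linear scan of the entries. A sketch of how you could check for it and enable it on an existing volume (the device name is a placeholder; the e2fsck step should be run with the filesystem unmounted):

# tune2fs -l /dev/sdXN | grep -i features   # see whether dir_index is already enabled
# tune2fs -O dir_index /dev/sdXN            # enable it (applies to newly created directories)
# e2fsck -fD /dev/sdXN                      # rebuild and optimize existing directory indexes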

ed

