Re: ext3 filesystem performance issues

aragonx@xxxxxxxxxx wrote:
I'm wondering at what point ext3 starts having issues with number of files
in a directory.

For instance, will any of the utilities fail (ls, mv, chown etc) if you
have more than x files in a directory?

Command-line programs don't actually fail on the number of files, but the shell has a limit on command-line length that will make them appear to fail if a file list or wildcard expansion exceeds it.
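A small sketch of that limit and the usual workaround; the directory below is a scratch location created by the script itself, not a real workload:

```shell
# The shell expands * before the program runs; when the expansion (plus the
# environment) exceeds ARG_MAX, execve() fails with E2BIG, which the shell
# reports as "Argument list too long".
echo "ARG_MAX on this system: $(getconf ARG_MAX) bytes"

# Scratch directory with a few files; real-world cases involve many thousands.
dir=$(mktemp -d)
for i in 1 2 3; do touch "$dir/file$i"; done

# Safe pattern: find batches its arguments below the limit with -exec ... +
find "$dir" -maxdepth 1 -type f -exec ls -l {} + > /dev/null && echo "batched OK"

rm -rf "$dir"
```

`xargs` gives the same batching for pipelines (`ls | xargs rm -f`), since it too splits the argument list into runs that fit under the limit.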

At what point do things really start slowing down?

I was told by a coworker that all UNIX varieties have to do an ordered
list search when they have to perform any operations on a directory.  They
also stated that if there are more than 100k files in a directory, these
tools would fail.

Things can slow down a lot with large numbers of files. There is an ext3 option (dir_index, hashed directory indexes) that trades some time creating and deleting names for much faster lookups.
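For reference, a sketch of turning the feature on with tune2fs; /dev/sdXN is a placeholder device name, and the filesystem should be unmounted (or mounted read-only) when you run this:

```shell
# Enable hashed directory indexes on an existing ext3 filesystem.
tune2fs -O dir_index /dev/sdXN

# Directories that already exist are only indexed when rebuilt;
# e2fsck -D optimizes (reindexes) them in one pass.
e2fsck -fD /dev/sdXN

# Verify that dir_index now appears in the feature list.
tune2fs -l /dev/sdXN | grep -i features
```

Filesystems created with recent mke2fs usually have dir_index on by default, so the check is worth doing before anything else.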

This seems like a low number to me but I was looking for some expert
analysis.  :)

Aside from the time spent in ordinary lookups, a real killer is that creating a new file requires an initial scan to see whether the name already exists, then another to find an unused slot to create it in, and the directory must stay locked the whole time so nothing else sees the intermediate state.
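That cost can be sketched with a toy shell model, treating a directory as a flat newline-separated list of names (this is an illustration of the linear-scan behavior, not ext3's actual on-disk format): each create pays a full duplicate scan, so populating n entries is O(n^2) overall.

```shell
# Flat "directory" as a scratch file; each line is one occupied slot.
dirlist=$(mktemp)

for i in $(seq 1 500); do
    name="file$i"
    # Scan 1: full pass over every existing entry to check for a duplicate.
    if grep -qx "$name" "$dirlist"; then
        echo "exists: $name" >&2
    else
        # Scan 2 (claiming a free slot) degenerates to an append here.
        echo "$name" >> "$dirlist"
    fi
done

# Final entry count; the cumulative work to get here grew quadratically.
wc -l < "$dirlist"
rm -f "$dirlist"
```

With dir_index, the duplicate check becomes a hash lookup instead of a scan of every entry, which is exactly the trade-off described above.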

--
  Les Mikesell
   lesmikesell@xxxxxxxxx

