At 10:06 AM -0400 7/18/05, Stephen Tweedie wrote:
>Hi,
>
>On Sat, Jul 16, 2005 at 03:49:04PM -0400, Tony Nelson wrote:
>
>> Still, the timings above support the idea that lots of files make adding
>> and listing slow, and presumably opening a file would be like listing, so
>> keeping the number of files per directory modest is probably a good idea.
>> Maybe I'll try that experiment as well.
>
>Make sure you're using "htree" directory indexing.  For new
>filesystems, that means using the "-O dir_index" option.  For existing
>filesystems, "tune2fs -O dir_index" then "e2fsck -fD" to index the
>existing directories (do this offline, of course!).

In fact, I was.  A little more thought provided the answer: creating a
file requires searching the directory to make sure the name isn't a
duplicate.  If I had been testing on list-based directories, I would
have seen O(n) time per file created instead of O(1), so that alone
proves I wasn't testing list-based directories.

Indeed, I re-ran the tests on a volume with dir_index off and posted
the (much worse) results.  :(

I'm sorry that I won't be here for any reply, but I'll get it next
week.  :)
____________________________________________________________________
TonyN.:'                       <mailto:tonynelson@xxxxxxxxxxxxxxxxx>
      '                              <http://www.georgeanelson.com/>
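P.S. The duplicate-name argument above can be sketched in a few lines of
Python.  This is an illustration only, not ext3 code: the function names
are made up, a list stands in for a linear directory, and a set stands in
for an htree-indexed one.  The point is just that the mandatory
duplicate check makes each create O(n) in the linear case and roughly
O(1) in the indexed case.

```python
# Hypothetical model of directory-entry creation (not real ext3 code).
# Creating a name must first check that it isn't already present; the
# cost of that check is what separates the two directory layouts.

def create_linear(entries, name):
    """Linear (list-based) directory: the duplicate check scans every
    existing entry, so each create costs O(n)."""
    if name in entries:          # list membership walks the whole list
        raise FileExistsError(name)
    entries.append(name)

def create_indexed(entries, name):
    """Indexed (htree-like) directory: the duplicate check is a hash
    lookup, so each create costs about O(1)."""
    if name in entries:          # set membership is a hash probe
        raise FileExistsError(name)
    entries.add(name)

if __name__ == "__main__":
    import timeit
    n = 20000
    lin, idx = [], set()
    t_lin = timeit.timeit(
        lambda: [create_linear(lin, "f%d" % i) for i in range(n)], number=1)
    t_idx = timeit.timeit(
        lambda: [create_indexed(idx, "f%d" % i) for i in range(n)], number=1)
    print("linear:  %.3fs for %d creates" % (t_lin, n))
    print("indexed: %.3fs for %d creates" % (t_idx, n))
```

Creating many files in the linear version takes visibly longer as n
grows, while the indexed version stays flat -- which is exactly the
behavior I saw (and didn't see) in the timings.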