i'm wondering if i should be surprised by something i just noticed
about how "du" tries to avoid recounting files that it's already seen
via hard links. given the following directories, which are git
repositories i'm playing with just to see what space savings i might
get:

  $ ls -ld git*
  drwxrwxr-x 15 rpjday rpjday 12288 2007-11-01 07:53 git
  drwxrwxr-x 15 rpjday rpjday 12288 2007-11-01 08:23 git.local
  drwxrwxr-x 15 rpjday rpjday 12288 2007-11-01 08:23 git.nolinks
  $

the first is the master repo, the second (git.local) was cloned
allowing hard links to save space, while the third (git.nolinks) was
explicitly cloned without hard links (using --no-hardlinks). if i
check their disk usage individually, i get:

  $ for r in git* ; do
  > du -s $r
  > done
  26340   git
  26292   git.local
  26292   git.nolinks
  $

but if i use a wildcard instead, notice the difference:

  $ du -s git*
  26340   git
  9672    git.local
  26292   git.nolinks
  $

it's as if du remembers which files it has already seen under "git"
and, based on their hard links, won't recount them under "git.local".
if that's the case, then it's no surprise that the numbers for the
first two are reversed if i explicitly change the order of the
arguments:

  $ du -s git.local git git.nolinks
  26292   git.local
  9720    git
  26292   git.nolinks
  $

i can see what's happening, i just didn't realize that's how "du"
operates. is that deliberate?

rday

p.s. i can see the rationale here -- if du *didn't* do something like
that, you might get results that were wildly out of sync with
reality. like i said, i just never noticed it before.

-- 
========================================================================
Robert P. J. Day
Linux Consulting, Training and Annoying Kernel Pedantry
Waterloo, Ontario, CANADA

http://crashcourse.ca
========================================================================
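
p.p.s. for anyone who wants to see this without git, here's a minimal
sketch -- the sizes and the inode number shown below are illustrative,
since exact block counts depend on your filesystem:

  $ mkdir a b
  $ dd if=/dev/zero of=a/file bs=1M count=10   # a 10MB file under a
  $ ln a/file b/file                           # hard link it into b
  $ stat -c '%h %i %n' a/file b/file           # link count 2, same inode
  2 1234567 a/file
  2 1234567 b/file
  $ du -s a b
  10244   a     # the file's blocks are charged to the first argument
  4       b     # b gets only its own directory blocks
  $ du -s b a
  10244   b     # swap the arguments and the charge swaps with them
  4       a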
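
and if you want each argument charged in full no matter how the files
are linked, GNU du (at least the coreutils version) has -l
(--count-links), which counts hard-linked files as many times as it
sees them:

  $ du -sl git*

assuming no hard links *within* a single tree, that should report the
same per-directory numbers as running du on each directory
individually, regardless of argument order.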