On 06Oct2005 20:10, Mike McCarty <mike.mccarty@xxxxxxxxxxxxx> wrote:
| Today, I looked at my disc free space, after deleting some files.
| I found that, after deleting approx. 28M of files, df reported
| the disc as being 93% full. Well, the last time I tried looking,
| it was 85% full, just a couple of days ago.
[...]
| I searched and searched for where the space was hiding, and could
| not find it. I was comparing with the output from the earlier
| du -s /some/path/* | sort -gr | head, and couldn't find it.
[...]
| Eventually, I rebooted. Now df thinks that my disc is 84% full.
[...]
| At 93%, it must have been about 7098935 blocks used. How did a
| reboot free up 1017187 blocks?

It is possible that some process had a file (or files) open which had
been unlinked. Unlinking a file (i.e. "rm") only removes the name; the
storage associated with the file remains in use until everyone who has
it open lets go. Example:

    # dispatch a "tail" in the background, writing to a file
    tail -f /var/log/messages > my-copy-of-the-log &
    # remove the file's name
    rm my-copy-of-the-log

Although you have removed the file's name, "tail" still has it open
and so its storage is not deallocated. Indeed, tail may continue to
write to it, consuming more space, and you will never see it with "du"
because the file no longer has a filename. Killing the tail will
release the space; of course, a reboot accomplishes that too.

The classic example is log rotation: perhaps some daemon had not let
go of some large log file that had been removed.

Now, on the off chance that it is _not_ the above scenario, here is my
"dudiff" script, which I wrote several years ago. I used to run
"du -a" daily and save the outputs. Given two "du -a" output files,
running:

    dudiff du-a.old-file du-a.newer-file

will show you what grew or shrank. Dudiff is here:

    http://www.cskk.ezoshosting.com/cs/css/bin/dudiff

Cheers,
-- 
Cameron Simpson <cs@xxxxxxxxxx> DoD#743
http://www.cskk.ezoshosting.com/cs/

Life is like a sandwich, the more you put in, the better it tastes.
        - Steve B. Hill, <ccsbh@xxxxxxxxxxxxxxxxxx>
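
P.S. If you want to confirm the unlinked-but-open scenario without
rebooting, lsof can list open files whose names have been removed; a
quick check, assuming lsof is installed:

    # list open files with a link count below 1, i.e. unlinked;
    # the SIZE column shows how much space each is still holding,
    # and PID/COMMAND tell you which process to restart or kill
    lsof +L1

On Linux you can also scan /proc directly, since the kernel marks
such file descriptors with a "(deleted)" suffix:

    # each fd is a symlink to the file it refers to; as non-root
    # you will only see your own processes
    ls -l /proc/[0-9]*/fd 2>/dev/null | grep '(deleted)'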