Re: Disk defragmenter in Linux

Tim wrote:
Tim:

But such (static) data doesn't get fragmented; it stays as it was
originally written.  It's changing files that become fragmented, and
newly created ones.



Mike McCarty:

Er? Perhaps what Tony wrote was in error, but his understanding is
the same as mine: ext3 tends to fragment files as it writes them.


It would only be fragmenting the files that it writes to, not the ones
already on the disk.  Sure, a fragmented word processor document might
take a bit longer to open (though it'd have to be a large file for you
to notice), but the word processor is going to take just as long to
start up as it ever did.  Likewise with all the other unmodified files
on the drive (most of the OS and applications).  Writing a fragmented
file doesn't shuffle everything else around.

Things like large mail spool files have been about the only things that
strike me as a fragmentation issue.  Most other files are rather small.


And what you wrote doesn't address the directories, which get appended
to, and presumably fragmented, at the time they are created.


I was under the impression that the directory structure was recorded in
a manner that's different from how the files are stored.



Might I observe that the many-partitions layout so often recommended gives you all the disadvantages of a fragmented drive from day one?

Two busy partitions is one too many. In these times of cheap disks and USB2 enclosures, I'd rather have one partition for everything (except maybe /boot and other low-use stuff), and if an upgrade is contemplated, back up /home to a USB drive. At worst, almost anything can be backed up overnight. According to dd, I can back up /dev/hda (80 GB) to a USB2 disk at 14 MB/s on my laptop.
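
To give an idea of what I mean, the backup is nothing more exotic than dd piped somewhere safe. The device name and mount point below are just examples, so check them against your own system before running anything (dd will cheerfully overwrite the wrong target):

    # whole-disk image to a compressed file on the USB disk
    dd if=/dev/hda bs=1M | gzip -c > /mnt/usb/hda.img.gz

    # and to put it back later (onto a disk at least as large)
    gzip -dc /mnt/usb/hda.img.gz | dd of=/dev/hda bs=1M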

Arguably, I should be doing something of the sort regardless, as should anyone with many computers in their care.

FWIW, I used to use OS/2, and it was IBM's recommendation that one should not defrag HPFS partitions (which, btw, predates NTFS) because HPFS allocates space in bands (take a pad, divide it into eight columns, and you have eight bands), and so takes an initial performance hit. File expansions are done within the same band where possible, reducing the impact of further fragmentation. Performance was pretty uniform up to, I think, about 95% full.

Defragging an HPFS drive would involve putting all the files together into a single block, and the chances were good that you'd soon find files occupying extents both inside and outside that block, with consequent bad performance.

I've always assumed that, since the algorithms have existed for a long time, Linux filesystems are also good in that respect. The fact that no defragger is included in popular distros supports my (underinformed) view.
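
For anyone who'd rather have numbers than my assumption: e2fsck reports how many files are non-contiguous, and filefrag (both from e2fsprogs) shows the extents of a single file. The device and file names here are only examples:

    # read-only check on an unmounted filesystem; -n makes no changes,
    # and the summary line includes a "% non-contiguous" figure
    e2fsck -fn /dev/hda2

    # extent map and extent count for one file
    filefrag -v /var/spool/mail/john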

Journalling introduces a complication, but its effect depends on where the journal is. Also, journalling only has an effect when files are written.

Finally, the ultimate defrag tool is backup and restore. It might not be necessary, but it won't do any harm either.
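
By way of illustration only (the device and paths are examples, and the mkfs step wipes the partition, so be very sure of the backup before doing anything like this):

    tar -C /home -cpf /mnt/usb/home.tar .   # back up /home, preserving permissions
    umount /home
    mkfs.ext3 /dev/hda3                     # recreate the filesystem (destroys its contents)
    mount /dev/hda3 /home
    tar -C /home -xpf /mnt/usb/home.tar     # restore

Each file gets written back out in one pass, so the restored filesystem starts off about as unfragmented as it can be.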




--

Cheers
John

-- spambait
1aaaaaaa@xxxxxxxxxxxxxxxxxxxxxxx  Z1aaaaaaa@xxxxxxxxxxxxxxxxxxxxxxx
Tourist pics http://portgeographe.environmentaldisasters.cds.merseine.nu/

do not reply off-list

