Re: Disk defragmenter in Linux


 



Tony Nelson wrote:
At 2:32 PM +0700 12/23/05, Fajar Priyanto wrote:

Hi all,
A mail today from one of the newcomers brings this question back to my mind:
a Linux disk defragmenter. Is it really not needed?

I've been googling around and found that this matter has been discussed as
early as 1998, and it seems that the only distro that provides a defragmenter
program is Debian.

There are several ways of fixing a heavily fragmented disk in Linux, but the
easiest is to copy all of the content of the partition to another place,
completely erase that partition, and then copy the content back.

My own experience shows me just that. My /home partition was almost full, with
only 2% free space. During that time, my KMail became very slow, for example
when downloading email or when moving between mail folders. The hard disk was
just spinning all the time.

Then I copied all my files and mail from the /home partition, moved them to
another partition, and deleted them from /home. After that, I copied some of
the files and mail back, keeping 20% of /home free. So far the performance is
OK.
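
For what it's worth, a rough Python sketch of that copy-out / copy-back
approach might look like the following. The paths are made up, it assumes it
is run as root on an otherwise idle system, and in practice plain cp -a,
rsync, or tar is the usual tool, since shutil does not preserve file
ownership:

    import os
    import shutil

    SRC = "/home"                        # the fragmented filesystem (example path)
    STAGING = "/mnt/spare/home-backup"   # scratch space on another partition (example path)

    # 1. Copy everything off the fragmented partition.
    shutil.copytree(SRC, STAGING, symlinks=True)

    # 2. Remove the originals (or mkfs the partition for a really clean start).
    for name in os.listdir(SRC):
        path = os.path.join(SRC, name)
        if os.path.isdir(path) and not os.path.islink(path):
            shutil.rmtree(path)
        else:
            os.remove(path)

    # 3. Copy the data back; each file is rewritten into (mostly) contiguous blocks.
    for name in os.listdir(STAGING):
        src = os.path.join(STAGING, name)
        dst = os.path.join(SRC, name)
        if os.path.isdir(src) and not os.path.islink(src):
            shutil.copytree(src, dst, symlinks=True)
        else:
            shutil.copy2(src, dst)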

However, the question still remains. If Linux ext3 doesn't need a
defragmenter and is able to defrag itself, what is the process name? When does
it run? Can I see it in action? Is there a utility to show how fragmented my
filesystem currently is? I tried fsck but had no luck.


The opinion that EXT2 doesn't need defragmenting is based on only a
filesystem-level view of the problem, and doesn't consider data read and
write performance.  EXT2 does make an effort to keep data only a short seek
away ("clustered").  With this clustering, the filesystem operations of
adding, removing, extending, and shortening files are not much affected by
fragmentation.

With EXT3 (journalling), which always writes data to a new place, updates
the filesystem info, and then frees the old data (roughly speaking),
fragmentation is a way of life, and there isn't much to be done about it.
Clustering helps by keeping the seeks relatively short, if there is space
nearby.
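
On the measurement question above: e2fsck, run against an unmounted
filesystem, prints a "non-contiguous" percentage in its summary, and the
filefrag tool from e2fsprogs counts per-file extents using the FIBMAP ioctl.
A rough Python sketch of the same idea, assuming root privileges and the
usual FIBMAP/FIGETBSZ values from <linux/fs.h>:

    import fcntl
    import os
    import struct
    import sys

    FIBMAP   = 1   # logical-to-physical block lookup, from <linux/fs.h>
    FIGETBSZ = 2   # filesystem block size, from <linux/fs.h>

    def fragments(path):
        """Count runs of physically contiguous blocks in a file (needs root)."""
        fd = os.open(path, os.O_RDONLY)
        try:
            blksize = struct.unpack("i", fcntl.ioctl(fd, FIGETBSZ, struct.pack("i", 0)))[0]
            nblocks = (os.fstat(fd).st_size + blksize - 1) // blksize
            runs, last = 0, None
            for i in range(nblocks):
                phys = struct.unpack("i", fcntl.ioctl(fd, FIBMAP, struct.pack("i", i)))[0]
                if phys == 0:          # hole in a sparse file; skip it
                    continue
                if last is None or phys != last + 1:
                    runs += 1          # start of a new extent
                last = phys
            return runs
        finally:
            os.close(fd)

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            print("%s: %d fragment(s)" % (name, fragments(name)))

Running it on something static like /bin/ls will usually report a single
fragment; a large mailbox on a nearly full /home is where the count climbs.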

I've heard this argument before, quite a few times in fact. It ignores one
big fact of life with regard to discs: almost all the data on my disc is
*static*. It doesn't change. So having files (like /bin/ls, for example) be
contiguous saves enormously when they are read. Even if most of the data on
one's disc is not static, quite a bit of it still is, or should be. (I'll
leave the abominable prelink out of the discussion for the moment.)

When you have only 2% free, it's just about certain that the free space is
a long way away from the rest of the data in a file.  Just deleting to get
20% free would probably have fixed your problem.

Absolutely. Running with that little free space on a Linux system is insanity
anyway. I'd be afraid of a system lockup.
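
Checking where you stand is cheap, e.g. with df, or from Python (the mount
point is just an example):

    import os

    st = os.statvfs("/home")   # example mount point
    # f_bavail is what df reports: blocks available to ordinary users
    print("free: %.1f%%" % (100.0 * st.f_bavail / st.f_blocks))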

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!

