Re: Disk defragmenter in Linux

<snip>

> >>However, still the question remains. If Linux ext3 doesn't need a
> >>defragmenter, and is able to defrag itself, what is the process name?
> >>And when does it run?  Can I see it in action? Is there a utility
> >>to see what percentage my current fragmentation is? I tried fsck
> >>but no luck

Linux does NOT need a defragmenter.  There is an argument to be made that
on a Linux system with many processes accessing a pretty dynamic file
system, a bit of fragmentation can help throughput.  Take a squint at:

http://www.salmar.com/pipermail/wftl-lug/2002-March/000603.html

I have run large active servers with dynamic file systems that ran Linux
for years, rebooting just to update the OS, never doing more file system
maintenance than removing files or moving them to other partitions.  In
my early UNIX days the BSD Fast File System reserved 10% free space on
its file systems.  I note that modern Linux file systems are created
with 5% reserved free space.  I think that is a bit tight even with
improvements in design.  You can check the reserved space in the values
from a df of your file systems.  The file system reserved free space is

free space = total blocks - (used + available)

That is why you can see a file system at more than 100% utilization.
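
For the curious, here is a quick sketch of that arithmetic in C using
statvfs(3); the mount point "/mnt/data" is just a placeholder.  df's
Use% is computed as used / (used + available), and since the reserved
blocks are left out of "available", a nearly full file system can
report more than 100%:

  #include <stdio.h>
  #include <sys/statvfs.h>

  int main(void)
  {
      /* "/mnt/data" is a placeholder -- point it at any mounted fs. */
      struct statvfs vfs;
      if (statvfs("/mnt/data", &vfs) != 0) {
          perror("statvfs");
          return 1;
      }

      unsigned long long total = vfs.f_blocks;   /* total data blocks   */
      unsigned long long avail = vfs.f_bavail;   /* free for non-root   */
      unsigned long long used  = vfs.f_blocks - vfs.f_bfree;
      unsigned long long reserved = vfs.f_bfree - vfs.f_bavail;

      /* Reserved blocks are excluded from "available", so once used
         eats into them, used / (used + avail) goes past 100%. */
      printf("total=%llu used=%llu avail=%llu reserved=%llu\n",
             total, used, avail, reserved);
      printf("Use%% = %.1f\n", 100.0 * used / (double)(used + avail));
      return 0;
  }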

> > The opinion that EXT2 doesn't need defragmenting is based on only a
> > filesystem-level view of the problem, and doesn't consider data read and
> > write performance.  EXT2 does make an effort to keep data only a short seek
> > away ("clustered").  With this clustering, the filesystem operations of
> > adding, removing, extending, and shortening files are not much affected by
> > fragmentation.

I do not understand this comment, as disk I/O is largely what file
system designs are about.  Integrity and reliability are important
considerations, true enough, but if a Linux system is servicing numerous
client processes it is receiving requests to read or write data randomly
around the disk.  With disk read-ahead and disk address sorting, a well
designed driver accessing a Linux file system does not notice file
fragmentation.  Fragmentation is really an issue higher up in the OS, in
that part of the file system code which deals with the allocation of
inodes and data blocks (and, in file systems that support them, file
system fragments).  Things can get really ugly there, while at a lower
level things tend to perk along.
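
To put a number on "disk address sorting", here is a toy example (the
block addresses are invented): servicing requests in arrival order
versus one sorted sweep across the disk, and the total head travel each
costs:

  #include <stdio.h>
  #include <stdlib.h>

  static int cmp(const void *a, const void *b)
  {
      long x = *(const long *)a, y = *(const long *)b;
      return (x > y) - (x < y);
  }

  /* Total seek distance to service requests in the given order. */
  static long total_seek(const long *req, int n, long head)
  {
      long dist = 0;
      for (int i = 0; i < n; i++) {
          dist += labs(req[i] - head);
          head = req[i];
      }
      return dist;
  }

  int main(void)
  {
      long arrival[] = { 9000, 120, 7500, 300, 8800, 40 };
      int n = sizeof arrival / sizeof arrival[0];
      long head = 0;                    /* assume head starts at block 0 */

      printf("arrival order: %ld blocks\n", total_seek(arrival, n, head));

      long sorted[6];
      for (int i = 0; i < n; i++) sorted[i] = arrival[i];
      qsort(sorted, n, sizeof sorted[0], cmp);  /* one sweep, low travel */

      printf("sorted order:  %ld blocks\n", total_seek(sorted, n, head));
      return 0;
  }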

File systems that support "file system fragments" use large "file system
blocks", say 4K, which is the basic disk I/O size, but allocate data by
the fragment, 1K or 2K.  These are design issues.  One of the benefits
of this design is that it aids the clustering of data within a single
physical disk I/O operation, and it accommodates large files measured in
file system blocks alongside smaller files for which a small number of
fragments is appropriate.  It does complicate the file system code,
some.
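
A worked example with those numbers, in the BSD FFS style where only
the tail of a file lives in fragments: with 4K blocks and 1K fragments,
a 9K file takes two full blocks plus one fragment, 9K in all, where
whole-block allocation would take 12K:

  #include <stdio.h>

  /* Sketch of fragment-style allocation: whole blocks for the body of
     a file, fragments only for the tail.  The 4K/1K sizes are from the
     example above, not from any particular file system. */
  int main(void)
  {
      const long bsize = 4096, fsize = 1024;
      long file = 9 * 1024;                      /* a 9K file        */

      long blocks = file / bsize;                /* full blocks: 2   */
      long tail   = file % bsize;                /* leftover: 1K     */
      long frags  = (tail + fsize - 1) / fsize;  /* fragments: 1     */

      printf("fragment allocation: %ld bytes\n",
             blocks * bsize + frags * fsize);            /* 9216  */
      printf("whole-block only:    %ld bytes\n",
             ((file + bsize - 1) / bsize) * bsize);      /* 12288 */
      return 0;
  }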

Actually the Linux EXT[23] file system structures contain a fragment
size, but the fragment size is the same as the block size.
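
You can see that from userspace with statvfs(3); on ext2/ext3 the
f_frsize (fragment size) field should come back equal to f_bsize:

  #include <stdio.h>
  #include <sys/statvfs.h>

  int main(void)
  {
      /* "/" is just a convenient mount point to query. */
      struct statvfs vfs;
      if (statvfs("/", &vfs) != 0) {
          perror("statvfs");
          return 1;
      }
      printf("block size:    %lu\n", vfs.f_bsize);
      printf("fragment size: %lu\n", vfs.f_frsize);
      return 0;
  }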

<snip>

> > When you have only 2% free, it's just about certain that the free space is
> > a long way away from the rest of the data in a file.  Just deleting to get
> > 20% free would probably have fixed your problem.
> 
> Absolutely. Running with just that much free on a Linux system is
> insanity, anyway. I'd be afraid of a system lockup.

As I said above, I am not sure that even 5% is enough free space; 2% is
clearly too little.

dlg

