Re: Disk defragmenter in Linux


 



On Fri, 2005-12-30 at 12:56 -0500, Ed Hill wrote:
> On Fri, 2005-12-30 at 10:10 -0600, Mike McCarty wrote:
> > Again, the point was that some claim that ext3 does not and will
> > not fragment files which are not dynamic. I claimed that fragmentation
> > can occur simply due to install of software, which some claimed
> > will not and does not occur with ext3. I think that I have demonstrated
> > my point. In fact, I was quite shocked that it was as bad as that,
> > frankly.
> 
> Hi Mike,
> 
> OK, fragmentation can and sometimes does occur.  You've explained why
> and how.
> 
> So the next logical question is: what difference, if any, does it make?
> Can you or anyone else come up with a way to measure the effect or some
> aspect of it?  Perhaps a benchmark that shows how application startup
> times suffer?
> 
> I'm not a filesystems guru, but even so its not at all clear to me that
> fragmentation must necessarily cause a big or repeated performance hit.
> Given Linux's VM, it seems plausible that an initial file load might
> suffer (maybe a lot or maybe a tiny bit?) and that subsequent file
> accesses will be from pages already cached in RAM.  
> 
> We should all keep open minds and, if possible, generate some actual
> benchmark data!
> 
> Ed
> 
Finally, we're back to the original post.

I am not a guru either, but I have been administering Unix systems
since the 1980s, and I have not found fragmentation to be a
significant cause of performance problems on any Unix or Linux
machine. Although fragmentation does occur, most Unix and Linux file
systems are designed to minimize fragmentation and maximize
utilization. Many of them try to write files using runs of contiguous
blocks. Each block is made up of a number of fragments; the number of
fragments per block depends on the drive size and other parameters.

The terminology around "fragment" confuses this discussion, and may
also be the cause of the original posting. This forum is not well
suited to discussing how files are allocated, because too many
different file systems use different algorithms to decide when to
allocate space for a file in a fragment. In basic terms, many file
systems allocate as many complete blocks as possible when creating a
file and put what's left into fragments. When a file with fragments
is updated and grows enough to fill more complete blocks, many Unix
and Linux file systems allocate full blocks and move the new data,
along with the data in the fragments, into complete blocks. If
sufficient space is available, many file systems also try to give
each file a contiguous range of blocks.

Even when all files have been assigned as many contiguous full blocks
as possible, some of the data will likely still be stored in
fragments. The fragmentation reported by fsck is the number of
fragments allocated in comparison to the number of files allocated;
it does not necessarily indicate that files are scattered all over
the drive.
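The block-plus-fragments allocation described above can be sketched in
a few lines of Python. The block and fragment sizes here (4 KiB blocks
split into 1 KiB fragments) and the sample file sizes are assumed for
illustration only, not taken from any particular file system:

```python
# Hedged sketch of UFS-style block/fragment allocation: a file gets as
# many full blocks as possible, and the tail goes into fragments.
# Sizes below are assumptions for illustration, not real tunables.

BLOCK_SIZE = 4096   # bytes per full block (assumed)
FRAG_SIZE = 1024    # bytes per fragment, i.e. 4 fragments per block (assumed)

def allocate(file_size):
    """Return (full_blocks, tail_fragments) for a file of file_size bytes."""
    full_blocks = file_size // BLOCK_SIZE
    tail = file_size - full_blocks * BLOCK_SIZE
    tail_fragments = -(-tail // FRAG_SIZE)  # ceiling division
    return full_blocks, tail_fragments

files = [10000, 4096, 500, 123456]          # example file sizes in bytes
layouts = [allocate(s) for s in files]

for size, (blocks, frags) in zip(files, layouts):
    print(f"{size:>7} bytes -> {blocks} full block(s) + {frags} fragment(s)")

# An fsck-style figure compares fragments allocated to files allocated.
# A non-zero number here does NOT mean data is scattered across the disk;
# it mostly counts file tails that don't fill a whole block.
total_fragments = sum(f for _, f in layouts)
print(f"fragmentation: {total_fragments / len(files):.2f} fragments per file")
```

Note that three of the four example files end up with a tail fragment
simply because their sizes aren't multiples of the block size, which
is exactly why a fragment count alone says little about scattering.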

In closing, the term "fragmentation" can mean two different things on
Linux and Unix file systems, but generally means only one thing in
the Windows world. This can be quite confusing when administrators
who come from a Windows background talk about fragmentation with
their Linux and Unix counterparts, because they may not be talking
about the same thing. From what I have read about the new file system
MS is working on, it will be more like Linux and Unix file systems
with respect to the inherent housekeeping and file allocation
mechanisms that reduce the scattering effect.



