Guy Fraser wrote:
> Finally we're back to the original post.
> I am not a guru either, but I have been administering Unix systems
> since the 1980s. I have not found fragmentation to be a significant
> cause of performance problems on any Unix or Linux machine. Although
> fragmentation does occur, most Unix and Linux file systems are
> designed to minimize fragmentation and maximize utilization. Many
> of them try to write files using runs of contiguous blocks. Each
> block is made up of a number of fragments; the number of fragments
> per block depends on the drive size and other file system parameters.
> The "fragment" terminology confuses this discussion, and may also be
> what prompted the initial posting. This forum is not well suited to
> discussing how files are allocated, because there are too many
> different file systems, each using different algorithms to decide
> when to allocate space for a file in a fragment. In basic terms [...]
Guy's point about this forum being unsuited to the discussion is untrue
in this context, as the OP specifically asked where to find a
defragmenter for ext3. That request is what led to the claim that a
defragmenter is not necessary for ext3, since its allocator has some
inherent resistance to fragmentation.
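Anyone who wants to check how fragmented a particular file has actually
become can do roughly what the filefrag tool does on ext2/ext3: walk the
file block by block with the FIBMAP ioctl and count discontiguous runs.
A rough sketch (Linux-specific, needs root, error checking mostly
trimmed):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/stat.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
        int fd, bsz = 0, i, nblocks, frags = 0, prev = -1;
        struct stat st;

        if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
            return 1;
        if (ioctl(fd, FIGETBSZ, &bsz) != 0 || bsz <= 0)
            return 1;                    /* file system block size */
        fstat(fd, &st);
        nblocks = (st.st_size + bsz - 1) / bsz;

        for (i = 0; i < nblocks; i++) {
            int blk = i;                 /* in: logical block number */
            ioctl(fd, FIBMAP, &blk);     /* out: physical block, 0 = hole */
            if (blk != 0 && blk != prev + 1)
                frags++;                 /* start of a new extent */
            prev = blk;
        }
        printf("%s: %d block(s), %d fragment(s)\n", argv[1], nblocks, frags);
        close(fd);
        return 0;
    }

A count of 1 means the file is fully contiguous; on a typically loaded
ext3 partition that is the common case, which is the inherent
resistance mentioned above.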
Another question, posed by Ed Hill and AFAIK still unanswered, is just
how much performance degradation fragmentation might actually cause.
Unfortunately, that depends entirely on what the file is used for and
how often it is read. Most (all, today?) disc drives have read-ahead
caching built into the drive, so reads of sequential sectors are quite
a bit faster than random reads, even when no seek is necessary.
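For anyone curious to see that effect on their own hardware, here is a
crude sketch of my own that reads a file first sequentially and then in
scattered order and compares wall-clock time. The OS page cache is not
flushed here, so use a file much larger than RAM (or a freshly mounted
file system) for meaningful numbers; older libc may need -lrt for
clock_gettime:

    #define _XOPEN_SOURCE 600
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    #define CHUNK 4096

    /* Read nchunks CHUNK-sized pieces of the file, either in order
     * or at random offsets, and return the elapsed seconds. */
    static double read_pass(const char *path, int scattered, int nchunks)
    {
        char buf[CHUNK];
        struct timespec t0, t1;
        int fd = open(path, O_RDONLY), i;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < nchunks; i++) {
            off_t off = (off_t)(scattered ? rand() % nchunks : i) * CHUNK;
            pread(fd, buf, CHUNK, off);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(int argc, char **argv)
    {
        int n = (argc > 2) ? atoi(argv[2]) : 1024;

        if (argc < 2) {
            fprintf(stderr, "usage: %s file [nchunks]\n", argv[0]);
            return 1;
        }
        printf("sequential: %.3f s\n", read_pass(argv[1], 0, n));
        printf("scattered:  %.3f s\n", read_pass(argv[1], 1, n));
        return 0;
    }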
Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!