Andrew Morton wrote:
It could be due to I/O scheduler changes. Which one are you using? CFQ?
Or it could be that there has been some changed behaviour at the VFS/pagecache
layer: the VFS might be submitting little hunks of lots of files, rather than
large hunks of few files.
Or it could be a block-layer thing: perhaps some driver change has caused
us to be placing less data into the queue. Which device driver is that machine
using?
Being a simple soul, the first thing I'll try when I get near a test box
will be
for i in $(seq 1 16)
do
time dd if=/dev/zero of=$i bs=1M count=1024 &
done
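
The same loop can be extended to wait for all sixteen writers and report one aggregate figure, which makes runs easier to compare across kernels (a sketch; /mnt/test is a placeholder for the filesystem under test):

#!/bin/sh
# Parallel-write test: sixteen 1 GB streams, aggregate wall time.
# /mnt/test is a placeholder; run it on the filesystem being measured.
cd /mnt/test || exit 1
start=$(date +%s)
for i in $(seq 1 16)
do
        dd if=/dev/zero of=$i bs=1M count=1024 &
done
wait    # block until every background dd has finished
end=$(date +%s)
echo "16 GB written in $((end - start)) seconds"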
I first tried the dd test; the results are similar to those of the FFSB
tests, about 15 percent degradation between 2.6.20.7 and 2.6.21-rc7.
I was using the CFQ I/O scheduler. After switching to the "deadline"
scheduler the problem is gone: I get similar throughput values
with the 2.6.20.7 and 2.6.21-rc7 kernels.
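
For reference, the active scheduler can be inspected and switched at run
time through sysfs, with no reboot needed (sda here is a placeholder for
the device under test):

# the active scheduler is shown in brackets
cat /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sda/queue/scheduler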
So can we conclude that it's due to the CFQ scheduler?
I also checked the device driver in use; its revision number is the same
in 2.6.20 and 2.6.21.
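
(For a modular driver, the revision can be read with modinfo under each
kernel; MODULE below is a placeholder for the block driver in use:)

# MODULE is a placeholder for the driver module name
modinfo -F version MODULE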
Valérie