Jens Axboe wrote:
>> Our tests indicate that the blktrace approach is fine for performance
>> analysis as long as the system to be analysed isn't too busy.
>> But once the system faces a considerable amount of non-sequential I/O
>> workload, the sheer volume of blktrace-generated data starts to get painful.
> Why haven't you done an analysis and posted it here? I surely cannot fix
> what nobody tells me is broken or suboptimal.
Fair enough. We have tried out the silly way of blktrace-ing, storing the
data locally, so that approach is probably not worth discussing further.
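
(For the record, "storing data locally" means the obvious setup sketched
below; /dev/sda and the 60 second runtime are just placeholders, flags as
per the blktrace/blkparse man pages:

    blktrace -d /dev/sda -o sda-trace -w 60
    blkparse -i sda-trace > sda-trace.txt

i.e. both the raw per-CPU trace files and the parsed output end up on the
disks of the system under test.)
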
> I have to say it's news to
> me that it's performance intensive, tests I did with Alan Brunelle a
> year or so ago showed it to be quite low impact.
I found some discussions on linux-btrace (February 2006).
There is little information on how the alleged 2 percent impact has
been determined. Test cases seem to comprise formatting disks ...hmm.
If the system runs I/O-bound, how can traces be written out without
stealing bandwidth and causing side effects?
> You'd be silly to locally store traces, send them out over the network.
Will try this next and post complaints, if any, along with numbers.
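
(Concretely, I am thinking of something along these lines - untested so
far; <collector> and the port are placeholders, the netcat flags differ
between netcat flavours, and newer blktrace versions may offer a built-in
network mode instead:

    on the traced host:   blktrace -d /dev/sda -o - | nc <collector> 1234
    on the collector:     nc -l -p 1234 | blkparse -i - > sda-trace.txt

so the trace data never touches the local disks of the system under test.)
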
However, a fast network connection plus a second system for blktrace
data processing are serious requirements. Think of servers secured
by firewalls. Reading some counters in debugfs, sysfs or whatever
might be more appropriate for someone who has noticed an unexpected
I/O slowdown and needs directions for further investigation.
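
(To make "reading some counters" concrete: the block layer already
exports per-disk statistics that can be sampled with next to no
overhead, e.g.

    cat /proc/diskstats
    cat /sys/block/sda/stat

with the field meanings described in Documentation/iostats.txt; sampling
twice and diffing, or letting iostat -x do that, is often enough to tell
whether a disk is unexpectedly busy. sda is, again, just a placeholder.)
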
Martin