Re: I/O statistics per process

Hi!

I didn't find anything new on this (Andrew? Jay?), or whether someone else has sent a patch, but I'd like to report that I came across a really nice tool which would immediately benefit from a per-process I/O statistics feature.

Please note - this mail is not meant to clamor for such a feature - it's just to point out some more benefits this feature would bring if it existed.

vmktree at http://vmktree.org/ is a really nice monitoring tool that can graph performance statistics for a host running VMware virtual machines (closed source - evil - I know ;) - and it can break those statistics down per virtual machine.

What hurts most here is that you have no clue how much I/O each of those virtual machines is generating. You may guess that a machine with a 100% idle CPU will not generate any significant amount of I/O, but vmktree would be so much more powerful with per-process I/O statistics, since you could use your systems more efficiently once you knew about your I/O hogs, too.

Having the ability to read such information from /proc would be a real killer feature - good for troubleshooting and also good for gathering important statistics!
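
To illustrate, here is a minimal sketch of what a userspace reader of such a file could look like. The /proc/<pid>/io path and the field layout are assumptions made for this sketch; no such interface exists yet:

/*
 * Sketch: dump per-process I/O counters from a hypothetical
 * /proc/<pid>/io file (the path and format are assumptions here).
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path), "/proc/%s/io", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);	/* e.g. "read_bytes: 4096" */
	fclose(f);
	return 0;
}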

Roland

PS:
Here is another person desperately seeking a tool that provides this information: http://www.tek-tips.com/viewthread.cfm?qid=1284288&page=4




----- Original Message -----
From: "Andrew Morton" <[email protected]>
To: "Jay Lan" <[email protected]>
Cc: "roland" <[email protected]>; "Fengguang Wu" <[email protected]>;
<[email protected]>; <[email protected]>
Sent: Thursday, September 28, 2006 11:14 PM
Subject: Re: I/O statistics per process


On Thu, 28 Sep 2006 15:00:17 -0700
Jay Lan <[email protected]> wrote:

>>>    in __set_page_dirty_[no]buffers().) (But that ends up being wrong
>>>    if someone truncates the file before it got written)
>>>
>>> - it doesn't account for file readahead (although it easily could)
>>>
>>> - it doesn't account for pagefault-initiated readahead (it could)
>>>
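
(Illustration, not kernel code: a minimal sketch of the per-task accounting idea discussed above. The struct and helper names here are hypothetical.)

/* Hypothetical per-task I/O counters -- a sketch of the idea. */
struct task_io_counters {
	unsigned long long read_bytes;	/* bytes this task caused to be read */
	unsigned long long write_bytes;	/* bytes this task caused to be written */
};

/*
 * Would be called when a task dirties a page, e.g. from
 * __set_page_dirty_[no]buffers().  As noted above, charging at
 * dirty time over-counts if the file is truncated before writeback.
 */
static inline void io_account_dirty(struct task_io_counters *ioc,
				    unsigned long bytes)
{
	ioc->write_bytes += bytes;
}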

> Mmm, I am not a true FS I/O person. The data collection patches I
> submitted in Nov 2004 were the code I inherited and have been
> used in production systems by our CSA customers. We lost a bit in
> content and accuracy when CSA was ported from IRIX to Linux. I am
> sure there is room for improvement without much overhead.

OK, well it sounds like we're free to define these in any way we like.  So
we actually get to make them mean something useful - how nice.

I hereby declare: "approximately equal to the number of filesystem bytes
which this task has caused to occur, or which shall occur in the near
future".

> Maybe FS I/O guys can chip in?

I used to be one of them.  I can take a look at doing this.  Given the
lack of any application to read the darn numbers out I guess I'll need
to expose them in /proc for now.  Yes, that monitoring patch (and an
application to read from it!) would be appreciated, thanks.
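
(Illustration: one way such counters might be printed once exposed in /proc, via the kernel's seq_file interface. Hooking this up under /proc/<pid> is omitted, and the struct is the hypothetical one sketched earlier.)

#include <linux/seq_file.h>

/*
 * Sketch: emit the hypothetical counters through a seq_file show
 * method; how this handler gets wired into /proc/<pid> is omitted.
 */
static int io_counters_show(struct seq_file *m, void *v)
{
	struct task_io_counters *ioc = m->private;

	seq_printf(m, "read_bytes: %llu\n", ioc->read_bytes);
	seq_printf(m, "write_bytes: %llu\n", ioc->write_bytes);
	return 0;
}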


