Re: [Patch 0/5] I/O statistics through request queues

Frank Ch. Eigler wrote:
Hi -

On Thu, Oct 26, 2006 at 03:37:54PM +0200, Martin Peschke wrote:
[...]
lookup_table[key] = value, or
lookup_table[key]++

How does this scale?

It depends.  If one is interested only in aggregates as an end result,
then intermediate totals can be tracked individually per-cpu with no
locking contention, so this scales well.
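
A minimal sketch of that style as a systemtap script (the
ioblock.request probe point and its "size" variable are assumptions
here; any event source would do):

  # "<<<" feeds a value into a statistical aggregate; systemtap
  # accumulates these in per-cpu buckets, so concurrently firing
  # probes do not contend on a shared lock.
  global sizes

  probe ioblock.request { sizes <<< size }

  probe end {
      printf("requests: %d, avg size: %d bytes\n",
             @count(sizes), @avg(sizes))
  }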

Sorry for not being clear.
I meant scaling with regard to lots of different keys.
This is what you have described as "By 'rows'", isn't it?

For example, if I wanted to store a timestamp for each request
issued, and there were lots of devices and the I/O was driving
the system crazy - how would that affect the lookup time?

I would use, say, the address of struct request as the key. I would
store the start time of a request there. Once a request completes,
I would look up the start time and calculate a latency.

I would be done with that lookup table entry then.
But it won't go away, will it? Is this an issue?

It must be something other than an array, because key boundaries
aren't known when the lookup table is created, right?
And actual keys might be few and far between.

In systemtap, we use a hash table.
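
For the request-latency example, that would look roughly like this
(the two probed functions and the $rq target variable are
placeholders, not exact kernel symbols):

  global issue_time             # hash table keyed by request address

  probe kernel.function("issue_fn") {       # hypothetical issue path
      issue_time[$rq] = gettimeofday_us()
  }

  probe kernel.function("complete_fn") {    # hypothetical completion
      if ($rq in issue_time) {
          latency = gettimeofday_us() - issue_time[$rq]
          # ... feed latency into an aggregate here ...
          delete issue_time[$rq]
      }
  }

The delete statement drops the element again, so entries for
completed requests do not pile up in the table.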

What if the heap of intermediate results grows into thousands or
more?  [...]

It depends whether you mean "rows" or "columns".

By "rows", if you need to track thousands of queues, you will need
memory to store some data for each of them.  In systemtap's case, the
maximum number of elements in a hash table is configurable, and is all
allocated at startup time.  (The default is a couple of thousand.)
This is of course still larger than enlarging the base structures the
way your code does.  But it's only larger by a constant amount, and
makes it unnecessary to patch the code.
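
In script terms, the capacity can also be declared per array, e.g.

  global issue_time[4096]   # up to 4096 in-flight entries, preallocated

(the compile-time default, MAXMAPENTRIES, is 2048).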

By "columns", if you need to track statistical aggregates of thousands
of data points for an individual queue, then one can use a handful of
fixed-size counters, as you already have for histograms.
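
For instance, an entire latency distribution reduces to a few scalar
counters plus a fixed number of power-of-two buckets (assuming a
"latencies" aggregate filled as in the sketch above):

  global latencies   # filled elsewhere via: latencies <<< latency

  probe end {
      printf("n=%d min=%d avg=%d max=%d (us)\n",
             @count(latencies), @min(latencies),
             @avg(latencies), @max(latencies))
      print(@hist_log(latencies))  # log2 histogram, fixed-size buckets
  }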


Anyway, my point was not that you should use systemtap proper, or that
you need to use the same techniques for managing data on the side.
It's that by using instrumentation markers, more things are possible.

Right, I have gone off on a tangent - systemtap is just one of many
likely exploiters of markers.

Anyway, I think this discussion shows that any dynamically added client
of kernel markers which needs to hold extra data for entities like
requests might be difficult to implement efficiently (compared to
static instrumentation), because markers, by their nature, only allow
for code additions, not for additions to existing data structures.
