Paul Jackson wrote:
Shailabh wrote:
The overhead of creating cpusets just for this
reason seems excessive when the need is only to
reduce the number of sockets to monitor
As I reread this thread, some of my ancient interactions with process
accounting come to mind again.
K.I.S.S. - keep it simple, I'm telling myself.
I'm also thinking that since this is a system wide stat tool, it
wants to minimize interactions with other mechanisms.
Hog tying cpusets and process accounting together seems just
plain weird, and risks imposing conflicting demands on the cpuset
configuration of a system.
Please be so kind as to forget I suggested that ;).
What suggestion are you talking about :-)
How about a simple way to disable collection on specified CPUs?
Collecting this sort of data makes sense for certain managed system
situations, where one chooses to spend some portion of the system
tracking the rest of it.
Collecting it may put an intolerable performance impact on pedal to
the metal maximum performance beasts running on dedicated cpus/nodes.
I propose a per-cpu boolean flag to disable collection.
If this flag is set on the cpu on which a task happens to be when
exiting, then we just drop that data on the floor, silently, with no
accumulation, as quickly as we can, avoiding any system-wide locks.
Then I could run a managed job mix, collecting accounting data, on
some nodes, while running dedicated performance beasts on other nodes,
without the accounting interfering with the performance beasts.
Doing enablement/disablement on a per-CPU basis seems to fit the cpuset
framework, where jobs are closely tied to CPUs. Otherwise, from a generic
taskstats perspective, having the CPU of exit determine the output of
exit-related data seems a bit arbitrary.
Independently, the cpuset friendly customers could make use of cpusets
to help manage which jobs were on which cpus, so that they collected
their accounting data as desired. But no need for the accounting
system to be aware of that, past the state of its per-cpu flag.
Such a flag reduces the need for further (over) designing this to
handle the extreme case.
If someone has such an extreme case, they
can turn off collecting on some cpus, to get a handle on the situation.
Hmm? Again a very cpuset'ish solution, where turning off collection on a
set of cpus will mean only a known set of tasks (aggregated under a job)
get affected. In general, this seems like a terrible way of doing flow
control... just pick some tasks and shut their data output out
(admittedly that's what we're doing today when data gets dropped on
overflow, but I guess the aim here is to do better).
This could be done as a variant of your idea for multiple
TASKSTATS_LISTEN_GROUP's. Essentially, for now, we would have two
GROUP's - one that drops the data on the floor, and one that collects
it. Each cpu is in either one or the other group. Later on, when the
need arises, we add support for more GROUP's that can actually collect
data.
Sorry...don't like this proposal much but others may differ.
--Shailabh