Re: [ckrm-tech] [PATCH 0/4] sched: Add CPU rate caps

Peter Williams wrote:
Balbir Singh wrote:

Peter Williams wrote:


<snip>

Is it possible that the number of effective tasks
is greater than the limit of the group?


Yes.

How do we handle this scenario?


You've got the problem back to front. If the number of effective tasks is less than the group limit then you have the situation that needs special handling (not the other way around). I.e. if the number of effective tasks is less than the group limit then (strictly speaking) there's no need to do any capping at all, as the demand is less than the limit. However, in the case where the group limit is less than one CPU (i.e. less than 1000), the recommended thing to do would be to set the limit of each task in the group to the group limit.

Obviously, group limits can be greater than one CPU (i.e. 1000).

The number of CPUs on the system also needs to be taken into account for group capping: if the group cap is greater than the number of CPUs (i.e. greater than 1000 times the CPU count) there's no way it can be exceeded, and tasks in this group would not need any cap processing.
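
To make the rule above concrete, here is a minimal sketch (not code from the patch; the function name, its interface and the CAP_PER_CPU constant are invented for illustration) of how a group cap expressed in parts-per-thousand of one CPU might translate into a per-task cap:

#define CAP_PER_CPU	1000	/* 1000 == 100% of one CPU */

/*
 * Illustration only.  Returns 0 if no per-task capping is needed for a
 * member of the group, otherwise the per-task cap to apply.
 */
static unsigned int group_task_cap(unsigned int group_cap, unsigned int ncpus)
{
	/* A group cap of ncpus * 1000 or more can never be exceeded. */
	if (group_cap >= ncpus * CAP_PER_CPU)
		return 0;

	/*
	 * Group cap below one CPU: the simple option described above is
	 * to give every task in the group the group's own cap.
	 */
	if (group_cap < CAP_PER_CPU)
		return group_cap;

	/*
	 * Caps between one CPU and all CPUs need group-wide statistics;
	 * as a conservative placeholder, limit each task to the group cap.
	 */
	return group_cap;
}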


What if we have a group limit of 100 (out of 1000) and 150 effective tasks in
the group? How do you calculate the cap of each task?
I hope my understanding of effective tasks is correct.

<snip>


I should have elaborated here that (conceptually) modifying this code to apply caps to groups of tasks instead of individual tasks is simple. It mainly involves moving most of the data (statistics plus cap values) to a group structure, modifying the code to update statistics for the group instead of the task, and then making the decisions about whether a task should have a cap enforced (i.e. be moved to one of the soft cap priorities or sin binned) based on the group statistics.
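
As a rough illustration of the data motion being described (the structure, field names and choice of statistics here are invented, not taken from the patch), the per-task cap value and usage statistics would end up in something like a shared per-group object:

#include <linux/spinlock.h>
#include <linux/types.h>

/* Sketch only: where the per-task cap data and statistics might move to. */
struct group_cap_stats {
	u64 total_exec;		/* CPU time charged to the group so far */
	u64 total_wall;		/* wall-clock time over the same period */
};

struct group_cap {
	unsigned int cap;		/* parts-per-thousand of one CPU */
	struct group_cap_stats stats;	/* updated by tasks on any CPU */
	spinlock_t lock;		/* see the locking point below */
};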

However, maintaining and accessing the group statistics will require additional locking, because the run queue lock will no longer be able to protect the data: not all tasks in the group will be associated with the same CPU. Care will be needed to ensure that this new locking doesn't lead to deadlocks with the run queue locks.

In addition to the extra overhead caused by these locking requirements, the code for gathering the statistics will need to be more complex, also adding to the overhead. There is also the issue of increased serialization of task scheduling to be considered (there is already some due to load balancing), although, to be fair, this increased serialization will be confined within groups.
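
Continuing the invented sketch above, the statistics update path would have to take the group's own lock, and that lock would need a fixed ordering with respect to the run queue locks (for example, only ever acquired while a run queue lock is already held, never the other way around) to avoid the deadlocks mentioned:

/*
 * Illustration only: charge delta_exec of CPU time (and delta_wall of
 * elapsed time) to the group.  Runs on whichever CPU the task ran on,
 * so grp->lock is contended across CPUs; irqsave keeps the sketch safe
 * whether or not the caller already has interrupts disabled.
 */
static void group_cap_charge(struct group_cap *grp, u64 delta_exec, u64 delta_wall)
{
	unsigned long flags;

	spin_lock_irqsave(&grp->lock, flags);
	grp->stats.total_exec += delta_exec;
	grp->stats.total_wall += delta_wall;
	spin_unlock_irqrestore(&grp->lock, flags);
}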



The f-series CPU controller does all of what you say in 403 lines (including comments and copyright). I think the biggest advantage of maintaining the group statistics in the kernel is that certain scheduling decisions can be made based on group statistics rather than task statistics, which makes the mechanism independent of the number of tasks in the group (isolates the groups from changes in the number of tasks).


Yes, that's one of its advantages. Both methods have advantages and disadvantages.

Peter


--
	Cheers,
	Balbir Singh,
	Linux Technology Center,
	IBM Software Labs
