Re: Network slowdown due to CFS

On 03/10/2007, Jarek Poplawski <[email protected]> wrote:
> I can't see anything about clearing. I think this was about charging,
> which should change the key enough to move a task to, maybe, a better
> place in the queue (tree) than with the current approach.

Just a quick patch -- not tested, and I haven't evaluated all possible
implications yet. But someone might give it a try with his/her (the
latter even more welcome :-) favourite sched_yield() load.


--- sched_fair-old.c	2007-10-03 12:45:17.010306000 +0200
+++ sched_fair.c	2007-10-03 12:44:46.899851000 +0200
@@ -803,7 +803,35 @@ static void yield_task_fair(struct rq *r
 		update_curr(cfs_rq);
 
 		return;
+	} else if (sysctl_sched_compat_yield == 2) {
+		unsigned long ideal_runtime, delta_exec,
+			      delta_exec_weighted;
+
+		__update_rq_clock(rq);
+		/*
+		 * Update run-time statistics of the 'current'.
+		 */
+		update_curr(cfs_rq);
+
+		/*
+		 * Emulate (speed up) the effect of us being preempted
+		 * by scheduler_tick().
+		 */
+		ideal_runtime = sched_slice(cfs_rq, curr);
+		delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
+
+		if (ideal_runtime > delta_exec) {
+			delta_exec_weighted = ideal_runtime - delta_exec;
+
+			if (unlikely(curr->load.weight != NICE_0_LOAD)) {
+				delta_exec_weighted = calc_delta_fair(delta_exec_weighted,
+									&se->load);
+			}
+			se->vruntime += delta_exec_weighted;
+		}
+		return;
 	}
+
 	/*
 	 * Find the rightmost entry in the rbtree:
 	 */


>
> Jarek P.
>

-- 
Best regards,
Dmitry Adamushko