On Wed, 8 Feb 2006, Rik van Riel wrote:
> On Sun, 5 Feb 2006, Linux Kernel Mailing List wrote:
>
> > [PATCH] percpu data: only iterate over possible CPUs
>
> The sched.c bit breaks Xen, and probably also other architectures
> that have CPU hotplug. I suspect the reason is that during early
> bootup only the boot CPU is online, so nothing initialises the
> runqueues for CPUs that are brought up afterwards.
>
> I suspect we can get rid of this problem quite easily by moving
> runqueue initialisation to init_idle()...
Well, it works. This (fairly trivial) patch makes CPU hotplug work
again, by ensuring that the runqueue of a newly brought-up CPU is
initialized just before it is needed.
Without this patch, the "spin_lock_irqsave(&rq->lock, flags);" in
init_idle() would oops when CONFIG_DEBUG_SPINLOCK is set, because the
runqueue lock of the hotplugged CPU was never initialized.
With this patch, things just work.
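For context: init_idle() runs once per CPU as its idle thread is set up
(from sched_init() for the boot CPU, and via fork_idle() when a secondary
CPU is brought up), so it is a natural place for per-runqueue
initialization. Below is a minimal userspace sketch of the general
pattern, purely for illustration; the structures and helpers (toy_rq,
toy_init_idle, toy_bring_up_cpu) are made up and are not the actual
kernel code:

#include <stdio.h>
#include <string.h>

#define NR_TOY_CPUS 4

/* Toy stand-in for the per-CPU runqueue. */
struct toy_rq {
	int initialized;
	int nr_running;
};

static struct toy_rq toy_runqueues[NR_TOY_CPUS];
static int toy_cpu_online[NR_TOY_CPUS] = { 1, 0, 0, 0 }; /* only the boot CPU at boot */

/* Broken pattern: a one-time loop at boot only sees CPUs that are
 * already online, so CPUs hotplugged later are never initialized. */
static void boot_time_init_online_only(void)
{
	int i;

	for (i = 0; i < NR_TOY_CPUS; i++)
		if (toy_cpu_online[i])
			toy_runqueues[i].initialized = 1;
}

/* Fixed pattern: do the initialization in the per-CPU bringup path,
 * which runs once for every CPU as it comes up. */
static void toy_init_idle(int cpu)
{
	memset(&toy_runqueues[cpu], 0, sizeof(toy_runqueues[cpu]));
	toy_runqueues[cpu].initialized = 1;
}

static void toy_bring_up_cpu(int cpu)
{
	toy_cpu_online[cpu] = 1;
	toy_init_idle(cpu);	/* init happens here, per CPU, just in time */
}

int main(void)
{
	boot_time_init_online_only();	/* CPU 1 is still offline at this point */
	printf("cpu1 before hotplug: initialized=%d\n", toy_runqueues[1].initialized);

	toy_bring_up_cpu(1);		/* "hotplug": state is set up on demand */
	printf("cpu1 after hotplug:  initialized=%d\n", toy_runqueues[1].initialized);
	return 0;
}

The diff below does the same thing for the real runqueue fields.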
Signed-off-by: Rik van Riel <[email protected]>
--- linux-2.6.15.i686/kernel/sched.c.idle_init 2006-02-08 17:56:50.000000000 -0500
+++ linux-2.6.15.i686/kernel/sched.c 2006-02-08 17:58:57.000000000 -0500
@@ -4437,6 +4437,35 @@ void __devinit init_idle(task_t *idle, i
{
runqueue_t *rq = cpu_rq(cpu);
unsigned long flags;
+ prio_array_t *array;
+ int j, k;
+
+ spin_lock_init(&rq->lock);
+ rq->nr_running = 0;
+ rq->active = rq->arrays;
+ rq->expired = rq->arrays + 1;
+ rq->best_expired_prio = MAX_PRIO;
+
+#ifdef CONFIG_SMP
+ rq->sd = NULL;
+ for (j = 1; j < 3; j++)
+ rq->cpu_load[j] = 0;
+ rq->active_balance = 0;
+ rq->push_cpu = 0;
+ rq->migration_thread = NULL;
+ INIT_LIST_HEAD(&rq->migration_queue);
+#endif
+ atomic_set(&rq->nr_iowait, 0);
+
+ for (j = 0; j < 2; j++) {
+ array = rq->arrays + j;
+ for (k = 0; k < MAX_PRIO; k++) {
+ INIT_LIST_HEAD(array->queue + k);
+ __clear_bit(k, array->bitmap);
+ }
+ // delimiter for bitsearch
+ __set_bit(MAX_PRIO, array->bitmap);
+ }
idle->sleep_avg = 0;
idle->array = NULL;
@@ -6110,41 +6139,6 @@ int in_sched_functions(unsigned long add
void __init sched_init(void)
{
- runqueue_t *rq;
- int i, j, k;
-
- for_each_cpu(i) {
- prio_array_t *array;
-
- rq = cpu_rq(i);
- spin_lock_init(&rq->lock);
- rq->nr_running = 0;
- rq->active = rq->arrays;
- rq->expired = rq->arrays + 1;
- rq->best_expired_prio = MAX_PRIO;
-
-#ifdef CONFIG_SMP
- rq->sd = NULL;
- for (j = 1; j < 3; j++)
- rq->cpu_load[j] = 0;
- rq->active_balance = 0;
- rq->push_cpu = 0;
- rq->migration_thread = NULL;
- INIT_LIST_HEAD(&rq->migration_queue);
-#endif
- atomic_set(&rq->nr_iowait, 0);
-
- for (j = 0; j < 2; j++) {
- array = rq->arrays + j;
- for (k = 0; k < MAX_PRIO; k++) {
- INIT_LIST_HEAD(array->queue + k);
- __clear_bit(k, array->bitmap);
- }
- // delimiter for bitsearch
- __set_bit(MAX_PRIO, array->bitmap);
- }
- }
-
/*
* The boot idle thread does lazy MMU switching as well:
*/