* john stultz <[email protected]> wrote:
> -	if (!cpu_isset(smp_processor_id(), irq_affinity[irq])) {
> -		mask = cpumask_of_cpu(any_online_cpu(irq_affinity[irq]));
> -		set_cpus_allowed(current, mask);
> -	}
this is intentional, not a bug. The point of the code above is to ensure
that every IRQ handler is executed on one CPU. I.e. the irq threads are
semi-affine - they are strictly affine when executing a handler, but
they may switch CPUs if the affinity mask points them elsewhere.
but ... we might be able to relax that. Potentially some IRQ handlers
might assume per-cpu-ness, but that should be uncovered by
DEBUG_PREEMPT's smp_processor_id() checks.
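(for illustration only, a rough user-space sketch of that "pin to one CPU
picked from the allowed mask" pattern - sched_getaffinity()/sched_setaffinity()
and the helper name pin_self_to_one_allowed_cpu() stand in for the kernel's
any_online_cpu()/cpumask_of_cpu()/set_cpus_allowed(); this is not the kernel
code itself:)

#define _GNU_SOURCE
#include <sched.h>

/* sketch: bind the calling thread to one CPU chosen from its allowed mask */
static int pin_self_to_one_allowed_cpu(void)
{
	cpu_set_t allowed, single;
	int cpu;

	if (sched_getaffinity(0, sizeof(allowed), &allowed))
		return -1;

	/* pick the first CPU we are currently allowed to run on */
	for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
		if (CPU_ISSET(cpu, &allowed))
			break;
	if (cpu == CPU_SETSIZE)
		return -1;

	CPU_ZERO(&single);
	CPU_SET(cpu, &single);

	/* strictly affine from here on, like the irq thread while it runs a handler */
	return sched_setaffinity(0, sizeof(single), &single);
}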
> +	if (!cpus_equal(current->cpus_allowed, irq_affinity[irq]))
> +		set_cpus_allowed(current, irq_affinity[irq]);
> The patch below appears to correct this issue; however, it also
> repeatedly (on different irqs) causes the following BUG:
ah. This actually uncovered a real bug. We were calling __do_softirq()
with interrupts enabled (and being preemptible) - which is certainly
bad.
this was hidden before because the smp_processor_id() debugging code
handles tasks bound to a single CPU as per-cpu-safe.
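(that "bound to a single CPU" test boils down to the allowed mask containing
exactly one CPU; a minimal user-space sketch of the check, using
sched_getaffinity()/CPU_COUNT() rather than the in-kernel cpumask helpers,
with bound_to_single_cpu() being just an illustrative name:)

#define _GNU_SOURCE
#include <sched.h>

/*
 * sketch: a task that can only run on one CPU cannot migrate, so
 * per-cpu assumptions (and smp_processor_id() use) are safe for it
 */
static int bound_to_single_cpu(void)
{
	cpu_set_t allowed;

	if (sched_getaffinity(0, sizeof(allowed), &allowed))
		return 0;

	return CPU_COUNT(&allowed) == 1;
}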
could you check the (totally untested) patch below and see if that fixes
things for you? I've also added your affinity change.
Ingo
Index: linux-rt.q/kernel/irq/manage.c
===================================================================
--- linux-rt.q.orig/kernel/irq/manage.c
+++ linux-rt.q/kernel/irq/manage.c
@@ -717,24 +717,21 @@ static int do_irqd(void * __desc)
 	if (param.sched_priority > 25)
 		curr_irq_prio = param.sched_priority - 1;
-//	param.sched_priority = 1;
 	sys_sched_setscheduler(current->pid, SCHED_FIFO, &param);
 	while (!kthread_should_stop()) {
 		set_current_state(TASK_INTERRUPTIBLE);
 		do_hardirq(desc);
 		cond_resched_all();
+		local_irq_disable();
 		__do_softirq();
-//		do_softirq_from_hardirq();
 		local_irq_enable();
 #ifdef CONFIG_SMP
 		/*
 		 * Did IRQ affinities change?
 		 */
-		if (!cpu_isset(smp_processor_id(), irq_affinity[irq])) {
-			mask = cpumask_of_cpu(any_online_cpu(irq_affinity[irq]));
-			set_cpus_allowed(current, mask);
-		}
+		if (!cpus_equal(current->cpus_allowed, irq_affinity[irq]))
+			set_cpus_allowed(current, irq_affinity[irq]);
 #endif
 		schedule();
 	}
-
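(for comparison, the same "only touch the affinity when it actually differs"
pattern as a user-space sketch - CPU_EQUAL()/sched_setaffinity() stand in for
cpus_equal()/set_cpus_allowed(), and follow_affinity() plus the desired-mask
argument are made up for the example:)

#define _GNU_SOURCE
#include <sched.h>

/* sketch: reapply the desired mask only if the current one differs */
static int follow_affinity(const cpu_set_t *desired)
{
	cpu_set_t cur;

	if (sched_getaffinity(0, sizeof(cur), &cur))
		return -1;

	if (CPU_EQUAL(&cur, desired))
		return 0;	/* nothing changed */

	return sched_setaffinity(0, sizeof(*desired), desired);
}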