On Wed, Jun 08, 2005 at 09:09:44AM -0700, Ashok Raj wrote:
> On Wed, Jun 08, 2005 at 06:32:26AM -0700, Andi Kleen wrote:
> >
> > > I also see one minor weakness in the assumption that CPU Vectors
> > > are global. Both IA64/PARISC can support per-CPU Vector tables.
>
> One thing to keep in mind is that since we now have support for CPU hotplug,
> we need to factor in the case when a cpu is removed: the per-cpu vectors would
> require migrating the interrupt target to a new cpu, which would
> possibly require vector-sharing support as well in case the vector is already
> in use on all the other cpus.
Yes, it would require vector migration. I suppose the way to
do this would be to not offline a CPU as long as there are devices
that have interrupts referring to that CPU. And perhaps have some /sys
file that prevents allocating new vectors to a specific CPU.
User space could then do:
Set the sys file to prevent new vectors being allocated to CPU foo.
Read /proc/interrupts and unload all drivers that have vectors on CPU foo.
Reload the drivers.
Offline the CPU.
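Something like this untested sketch could handle the /proc/interrupts
step above, listing the lines that have a non-zero count in the column
of the CPU you want to offline (the /sys knob is only an idea so far,
so it is not shown):

/* sketch: list /proc/interrupts entries with a non-zero count in the
 * column of the given CPU, i.e. candidates to unload before offlining
 * it. Also matches NMI/LOC style lines; filter those by hand. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
	FILE *f = fopen("/proc/interrupts", "r");
	char line[512];
	int cpu = argc > 1 ? atoi(argv[1]) : 0;

	if (!f) {
		perror("/proc/interrupts");
		return 1;
	}
	fgets(line, sizeof(line), f);		/* skip the CPUn header */
	while (fgets(line, sizeof(line), f)) {
		char *p = strchr(line, ':');
		unsigned long count = 0;
		int col;

		if (!p)
			continue;
		p++;
		for (col = 0; col <= cpu; col++)	/* walk to the CPU column */
			count = strtoul(p, &p, 10);
		if (count)
			printf("%s", line);
	}
	fclose(f);
	return 0;
}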
BTW I suppose on many machines the problem would not occur
because they do hotplug not on individual CPUs, but on nodes
which consist of CPUs, PCI busses etc. On those you could
fully avoid it by just making sure the devices on the PCI
busses of a node only allocate interrupts on the local CPUs.
Then to offline a node you would need to hot-unplug the devices
before offlining the CPUs anyway.
From a NUMA optimization perspective that is a worthy
goal anyway, to avoid cross-node traffic.
>
> Possibly the irq balancer might need to be revisited as well, and it
> potentially might trigger some sharing needs.
The IRQ balancer is in user space on x86-64.
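Roughly, the user space balancer steers interrupts by writing CPU masks
into the existing /proc/irq/<n>/smp_affinity files, something like this
(sketch, error handling mostly omitted):

/* sketch: steer irq <irq> to the CPUs in <mask> (hex bitmask) using
 * the /proc/irq/<irq>/smp_affinity interface */
#include <stdio.h>

int set_irq_affinity(int irq, unsigned long mask)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%lx\n", mask);
	return fclose(f);
}

e.g. set_irq_affinity(19, 0x3) keeps IRQ 19 on CPUs 0-1, which is also
how you would keep a device's interrupts on its node-local CPUs.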
> A combination of
> - Not allocating IRQs to unused pins (which Natalie from Unisys
>   submitted)
> http://marc.theaimsgroup.com/?l=linux-kernel&m=111656957923038&w=2
That is already queued, but doesn't help in the general case.
> - per-cpu vector tables (a while back I remember seeing some post from SGI
>   on the topic, possibly under intr domains etc., not too sure)
That is the plan.
> - vector sharing
That doesn't help in the general case again, I think.
E.g. to fully cover offlining of all CPUs you would need to
share all vectors of all devices over all CPUs. But that totally
defeats the original goal of getting more than 255 vectors:
with every vector present in every per-CPU table you are back
to the single 256-entry limit.
-Andi