Re: [PATCH 2/2] Initial generic hypertransport interrupt support.

Benjamin Herrenschmidt <[email protected]> writes:

>> Examples? details? patches?
>> 
>> Part of the problem with plain MSI is that you can't mask irqs at the
>> source, in a generic way.
>
> This is an interesting point, because this shows precisely the difference
> in approach between most HW PPC implementations vs. what x86 does and
> what the current code does...
>
> Our MSIs are always routed as just additional sources to an existing IRQ
> controller that will itself then handle things like masking, affinity,
> etc...

Interesting.  In the short term I can see how that is a sane design.
In the long term it appears to require more hardware and to be a
bottleneck on the number of irqs you can have, so I don't expect
the current ppc model to be the long-term one.

Does the current ppc model give hypervisors more control than they
would get without going through an existing interrupt controller?

> Thus, we don't need nor want any kind of generic MSI code setting up an
> irq_chip with enable/disable functions etc... Those should stay the ones
> from the system's main PIC, maybe a different version of them, but
> that's up to the system PIC to set that up.

For the hypertransport case I have coded it this way and I think there
are good general cleanliness arguments for making this change to x86 as
well.  I think doing so actually winds up being less code.

> That's one of the reasons why I think we need to work the MSI arch side
> API such that the MSI "controller" is the one to set up the irq_desc. The
> "generic" mask/unmask/etc... using the MSI(X) config space can be
> provided as optional helpers, but it should be under arch, or more
> specifically MSI controller, control to pick how to set up the irq_desc
> and its associated irq_chip.
>
> Another one is the fact that we have multiple different MSI mechanisms on
> the same machines (like the Apple Quad G5, which has on one side an MSI
> "register" that devices write to and that triggers sources on the MPIC,
> and on the other side HT interrupts which can be generated from MSIs by
> the Broadcom HT<->PCIe bridge). Thus that msi_ops structure I've seen
> around should really be per PCI host bridge at the very least. One
> proposal I have, but didn't have time to actually implement, was to
> have the msi_ops pointer be a field in pci_bus that is inherited by
> default. That is, the arch can call pci_set_msi_ops() on a given bus and
> this will propagate to its children.
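
Just to make sure I'm reading that proposal right, here is roughly the
shape I imagine it taking.  None of these names exist today (I've used a
toy bus structure instead of touching struct pci_bus), so treat it purely
as an illustration of the inherited-from-the-parent-bus idea:

#include <linux/list.h>

struct msi_ops;				/* per-bridge MSI operations */

struct toy_pci_bus {			/* stand-in for struct pci_bus */
	struct list_head children;	/* list of child buses */
	struct list_head node;		/* link in the parent's children list */
	struct msi_ops *msi_ops;	/* the proposed inherited pointer */
};

/* The arch would call this once on a host bridge's root bus. */
static void pci_set_msi_ops(struct toy_pci_bus *bus, struct msi_ops *ops)
{
	struct toy_pci_bus *child;

	bus->msi_ops = ops;
	list_for_each_entry(child, &bus->children, node)
		pci_set_msi_ops(child, ops);	/* propagates to the children */
}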


So, to be very concise, what I did on the HT side is that I have a function:
	arch_setup_ht_irq(struct pci_dev *dev, int irq)
The arch code uses that to set up the irq_chip handler and the rest.

There are helper functions:
void write_ht_irq_low(unsigned int irq, u32 data);
void write_ht_irq_high(unsigned int irq, u32 data);
u32 read_ht_irq_low(unsigned int irq);
u32 read_ht_irq_high(unsigned int irq);

These read and write the individual hypertransport irq routing values for each irq.
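
For illustration, an arch-side irq_chip can then sit on top of those
helpers something like this.  The mask-bit position (bit 0 of the low
routing word) and the chip fields shown are my guesses for the sketch,
not something the patch dictates:

#include <linux/irq.h>
#include <linux/types.h>

/* Prototypes as listed above, from the generic HT irq code. */
extern void write_ht_irq_low(unsigned int irq, u32 data);
extern u32 read_ht_irq_low(unsigned int irq);

/* Assumes bit 0 of the low routing word is the HT request-mask bit. */
static void ht_irq_mask(unsigned int irq)
{
	write_ht_irq_low(irq, read_ht_irq_low(irq) | 1);
}

static void ht_irq_unmask(unsigned int irq)
{
	write_ht_irq_low(irq, read_ht_irq_low(irq) & ~1U);
}

static struct irq_chip ht_irq_chip = {
	.name	= "PCI-HT",
	.mask	= ht_irq_mask,
	.unmask	= ht_irq_unmask,
	/* .ack, .set_affinity, etc. stay whatever the arch normally uses */
};

arch_setup_ht_irq() then just attaches whatever chip the arch wants,
which is the point: the bus-generic code only ever touches the routing
registers through the helpers.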

I suspect this is the right general direction to move the msi code to.

The combination allows an architecture to take bus-specific details
into account and have its own irq handler methods.  At the same time
it is still the generic infrastructure controlling access to the
chip registers.

So I honestly expect things like msi_ops to become an arch-specific detail.
At the very least I think it is too early to start generalizing that way
if we really don't need to.

Eric