Re: [RFC][2.6.13-rc3-mm1] IRQ compression/sharing patch

On Tuesday 26 July 2005 09:03 am, Andi Kleen wrote:
> On Tue, Jul 26, 2005 at 12:12:41AM -0700, James Cleverdon wrote:
> > Here's a patch that builds on Natalie Protasevich's IRQ compression
> > patch and tries to work for MPS boots as well as ACPI.  It is meant
> > for a 4-node IBM x460 NUMA box, which was dying because it had
> > interrupt pins with GSI numbers > NR_IRQS and thus overflowed
> > irq_desc.
> >
> > The problem is that this system has 270 GSIs (which are 1:1 mapped
> > with I/O APIC RTEs) and an 8-node box would have 540.  This is much
> > bigger than NR_IRQS (224 for both i386 and x86_64).  Also, there
> > aren't enough vectors to go around.  There are about 190 usable
> > vectors, not counting the reserved ones and the unused vectors at
> > 0x20 to 0x2F.  So, my patch attempts to compress the GSI range and
> > share vectors by sharing IRQs.
> >
> > Important safety note:  While the SLES 9 version of this patch
> > works, I haven't been able to test the -rc3-mm1 patch enclosed.  I
> > keep getting errors from the adp94xx driver.  8-(
> >
> > (Sorry about doing an attachment, but KMail is steadfastly word
> > wrapping inserted files.  I need to upgrade....)
>
> The patch seems to have lots of unrelated stuff. Can you please
> split it out?

Of course.  I'll pull out the BUG_ON()s and some of the other cleanup 
stuff into a related patch.

> BTW I plan to implement per CPU IDT vectors similar to Zwane's i386
> patch for x86-64 soon, hopefully with that things will be easier too.

I hope so, too.  The problem has been making the most of limited 
interrupt resources (vectors and IRQ numbers) when earlier code assumed 
we could use them lavishly.
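
For what it's worth, the basic shape of it is roughly this (a sketch 
only, not the code in the attached patch; MAX_GSI, FIRST_DYNAMIC_IRQ, 
and map_gsi_to_irq are names invented just for illustration):  GSIs 
below 16 stay identity-mapped, higher GSIs are handed the next free IRQ 
slot, and once the slots run out the remaining GSIs get folded onto 
already-used IRQs so they share vectors.

/* Sketch only -- not the actual patch; all names here are invented. */
#ifndef NR_IRQS
#define NR_IRQS			224	/* i386/x86_64 today */
#endif
#define MAX_GSI			1024	/* plenty for a large box */
#define FIRST_DYNAMIC_IRQ	16	/* GSIs 0-15 stay 1:1 (ISA) */

static unsigned int gsi_to_irq_map[MAX_GSI];	/* 0 == not assigned yet */
static unsigned int next_free_irq = FIRST_DYNAMIC_IRQ;

static unsigned int map_gsi_to_irq(unsigned int gsi)
{
	if (gsi < FIRST_DYNAMIC_IRQ)
		return gsi;			/* legacy IRQs stay put */

	if (!gsi_to_irq_map[gsi]) {
		if (next_free_irq < NR_IRQS)
			/* compress: hand out the next unused irq_desc slot */
			gsi_to_irq_map[gsi] = next_free_irq++;
		else
			/* share: fold the overflow onto existing IRQs */
			gsi_to_irq_map[gsi] = FIRST_DYNAMIC_IRQ +
				gsi % (NR_IRQS - FIRST_DYNAMIC_IRQ);
	}
	return gsi_to_irq_map[gsi];
}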

> Andrew: this is not 2.6.13 material.
>
> > @@ -276,13 +276,13 @@ config HAVE_DEC_LOCK
> >  	default y
> >
> >  config NR_CPUS
> > -	int "Maximum number of CPUs (2-256)"
> > -	range 2 256
> > +	int "Maximum number of CPUs (2-255)"
> > +	range 2 255
> >  	depends on SMP
> > -	default "8"
> > +	default "64"
>
> Please don't change that,

Which?  The maximum number of addressable CPUs is 255, because APIC ID 
0xFF is reserved for broadcast.  Or, would you rather the default be 8 
or 16?  (Hmmm....  Dual-core, hyperthreaded CPUs are out.  They'll turn 
a quad box into a 16-way.)

> > +/*
> > + * Check the APIC IDs in MADT table header and choose the APIC mode.
> > + */
> > +void acpi_madt_oem_check(char *oem_id, char *oem_table_id)
> > +{
> > +	/* May need to check OEM strings in the future. */
> > +}
> > +
> > +/*
> > + * Check the IDs in MPS header and choose the APIC mode.
> > + */
> > +void mps_oem_check(struct mp_config_table *mpc, char *oem, char *productid)
> > +{
> > +	/* May need to check OEM strings in the future. */
> > +}
>
> Can you perhaps add it then later, not now?

Naturally.  It was a placeholder for those systems where we can't figure 
out what to do using heuristics on the ACPI/MPS table data.
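
If it helps to see the shape of what I had in mind, something like the 
following (illustration only: the OEM/product strings and the 
force_clustered_mode flag are invented, not taken from any real box):

/* Illustration only -- the strings and the flag are made up. */
/* strncmp comes from <linux/string.h> in-kernel. */
extern int force_clustered_mode;	/* invented for this example */

void mps_oem_check(struct mp_config_table *mpc, char *oem, char *productid)
{
	/* A box that the heuristics can't classify could be matched by name. */
	if (!strncmp(oem, "EXAMPLE ", 8) &&
	    !strncmp(productid, "BIGBOX 8NODE", 12))
		force_clustered_mode = 1;
}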

> > +static u8 gsi_2_irq[NR_IRQ_VECTORS] = { [0 ... NR_IRQ_VECTORS-1] = 0xFF };
>
> With the per cpu IDTs we'll likely need more than 8 bits here.

OK, I'll make it u32.  Or would you rather have u16?
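
Something along these lines, I imagine, with the "unassigned" sentinel 
widened to match (untested sketch):

/* Sketch: widen the map entries and the "unassigned" marker to 32 bits. */
static u32 gsi_2_irq[NR_IRQ_VECTORS] = { [0 ... NR_IRQ_VECTORS-1] = 0xFFFFFFFF };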

> > -	char str[16];
> > +	char oem[9], str[16];
> >  	int count=sizeof(*mpc);
> >  	unsigned char *mpt=((unsigned char *)mpc)+count;
> > +	extern void mps_oem_check(struct mp_config_table *mpc, char *oem, char *productid);
>
> That would belong in some header if it was needed.
>
> But please just remove it for now.

Right.

> The rest looks ok.
>
> -Andi

Thanks Andi!

-- 
James Cleverdon
IBM LTC (xSeries Linux Solutions)
{jamesclv(Unix, preferred), cleverdj(Notes)} at us dot ibm dot com
