On Mon, May 23, 2005 at 09:54:51AM -0700, Ashok Raj wrote:
> On Mon, May 23, 2005 at 06:40:46PM +0200, Andi Kleen wrote:
> > On Fri, May 20, 2005 at 03:16:22PM -0700, Ashok Raj wrote:
> > > Andi: You had mentioned that you would prefer not to replace the broadcast IPI
> > > with the mask version, for performance reasons. Currently this seems to be
> > > the best way short of taking a sledgehammer to the cpu_up process.
> >
> > I already put a sledgehammer to __cpu_up with that last
>
> Yours was a good sledgehammer :-) the way it should have been done,
> but it carried over legacy boot code from i386 that wasn't pretty. The one
> I am referring to is pretty darn slow, and I think many won't like slowing
> down the whole system just to add a new CPU.
>
> > patch. Some more hammering surely wouldn't be a big issue. Unlike i386
> > we actually still have a chance to test all relevant platforms, so I
> > don't think it is a big issue.
> >
> > What changes did you plan?
>
> The only other workable alternative would be to use a stop_machine()-like
> mechanism, as we do to atomically update cpu_online_map. This means we
> execute a high-priority thread on all CPUs, bringing the system to its knees before
That is not nice, agreed.
> just adding a new CPU. On very large systems this will definitely be
> visible.
I still don't quite get why it is not enough to keep interrupts
off until the CPU enters idle. Currently we enable them briefly
in the middle of the initialization (which is already dangerous,
because interrupt handlers can see half-initialized state such as an
out-of-date TSC), but I hope to get rid of that soon too. With the
full startup done with interrupts disabled (CLI), would your problems be gone?
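The ordering I have in mind is roughly this pseudocode sketch (illustrative only, not the actual x86-64 start_secondary path; set_cpu_online_bit is a stand-in name):

```
/* Sketch: keep interrupts masked for the whole secondary-CPU bringup,
 * so broadcast IPIs can never observe half-initialized per-CPU state
 * (e.g. an unsynchronized TSC). */
start_secondary():
        cpu_init()              /* per-CPU setup, IRQs still off       */
        smp_callin()            /* sync TSC, calibrate, IRQs still off */
        set_cpu_online_bit()    /* publish the CPU atomically          */
        local_irq_enable()      /* only now may IPIs be delivered      */
        cpu_idle()              /* enter the idle loop                 */
```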
>
> Just curious, what performance impact were you alluding to that would be lost
> if we don't use the shortcut IPI version?
I am worried about the TLB flush interrupt. I used to have
some workloads in 2.4 that stressed it very badly (e.g. a process
with a working set just above physical memory: it would constantly
fault in new pages while on another CPU kswapd unmapped
and aged pages, leading to a constant flood of flush IPIs). Another
case is COW in a multithreaded process. You always have to flush
all the other CPUs there.
Even smp_call_function is a bit of an issue in slab-intensive
loads, because the per-CPU slab caches rely on it. I don't
think it is as big an issue as the flush above, but it would still
be better to keep it fast.
> > P.S.: An alternative would be to define a new genapic subarch that
> > you only enable when you detect cpuhotplug support at boot.
> >
>
> There is nothing currently there to find out in a generic way, at the
> platform level, whether something is hotplug capable, other than adding
> command line options etc.
When you have the command line option you can do it. Later I guess
you will have a way to get it from ACPI (e.g. CPUs present in the tables
but marked inactive, etc.).
> Also FYI: the ACPI folks are experimenting with using CPU hotplug for
> suspend/resume support. So hotplug support may be required not just on
> platforms that physically support it, but also for other related uses.
I am aware of that. Also the virtualization people will likely use it.
-Andi
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/