Re: [patch] x86: make delay_tsc() preemptible again

* Peter Zijlstra <[email protected]> wrote:

> > +		prev_cpu = smp_processor_id();
> >  		rep_nop();
> > +		preempt_enable();
> 
> Why not have the rep_nop() here between the enable, and disable ?

yes, indeed - fixed.

> > +		/*
> > +		 * If we preempted we skip this small amount of time:
>                            ^ migrated, perhaps?

yeah, fixed.

updated patch below.

	Ingo

---------------->
Subject: x86: make delay_tsc() preemptible again
From: Ingo Molnar <[email protected]>

Make delay_tsc() preemptible again: instead of disabling preemption for the
whole delay, re-enable it around rep_nop() and compensate for CPU migration
(TSCs are per-CPU) by skipping the interval during which the task migrated.

Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
---
 arch/x86/lib/delay_32.c |   27 ++++++++++++++++++++++-----
 arch/x86/lib/delay_64.c |   29 +++++++++++++++++++++++------
 2 files changed, 45 insertions(+), 11 deletions(-)

Index: linux/arch/x86/lib/delay_32.c
===================================================================
--- linux.orig/arch/x86/lib/delay_32.c
+++ linux/arch/x86/lib/delay_32.c
@@ -38,17 +38,34 @@ static void delay_loop(unsigned long loo
 		:"0" (loops));
 }
 
-/* TSC based delay: */
+/*
+ * TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 static void delay_tsc(unsigned long loops)
 {
-	unsigned long bclock, now;
+	unsigned long prev, now;
+	long left = loops;
+	int prev_cpu, cpu;
 
-	preempt_disable();		/* TSC's are per-cpu */
-	rdtscl(bclock);
+	preempt_disable();
+	rdtscl(prev);
 	do {
+		prev_cpu = smp_processor_id();
+		preempt_enable();
 		rep_nop();
+		preempt_disable();
+		cpu = smp_processor_id();
 		rdtscl(now);
-	} while ((now-bclock) < loops);
+		/*
+		 * If we migrated we skip this small amount of time:
+		 */
+		if (prev_cpu != cpu)
+			prev = now;
+		left -= now - prev;
+		prev = now;
+	} while (left > 0);
 	preempt_enable();
 }
 
Index: linux/arch/x86/lib/delay_64.c
===================================================================
--- linux.orig/arch/x86/lib/delay_64.c
+++ linux/arch/x86/lib/delay_64.c
@@ -26,17 +26,34 @@ int read_current_timer(unsigned long *ti
 	return 0;
 }
 
+/*
+ * TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 void __delay(unsigned long loops)
 {
-	unsigned bclock, now;
+	unsigned long prev, now;
+	long left = loops;
+	int prev_cpu, cpu;
 
-	preempt_disable();		/* TSC's are pre-cpu */
-	rdtscl(bclock);
+	preempt_disable();
+	rdtscl(prev);
 	do {
-		rep_nop(); 
+		prev_cpu = smp_processor_id();
+		preempt_enable();
+		rep_nop();
+		preempt_disable();
+		cpu = smp_processor_id();
 		rdtscl(now);
-	}
-	while ((now-bclock) < loops);
+		/*
+		 * If we migrated we skip this small amount of time:
+		 */
+		if (prev_cpu != cpu)
+			prev = now;
+		left -= now - prev;
+		prev = now;
+	} while (left > 0);
 	preempt_enable();
 }
 EXPORT_SYMBOL(__delay);
-
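For reference, the bookkeeping the patch introduces can be sketched in user
space. This is only an illustrative sketch: clock_gettime() stands in for
rdtscl(), sched_getcpu() for smp_processor_id(), and the kernel's
preempt_disable()/preempt_enable() pair (which has no direct user-space
analogue) is shown as comments. delay_sketch() and read_cycles() are
hypothetical names, not kernel symbols.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <time.h>

/* Stand-in for rdtscl(): monotonic time in nanoseconds. */
static unsigned long read_cycles(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long)ts.tv_sec * 1000000000ul + ts.tv_nsec;
}

/* Delay for 'loops' time units, discarding intervals where we migrated. */
static void delay_sketch(unsigned long loops)
{
	unsigned long prev, now;
	long left = loops;
	int prev_cpu, cpu;

	prev = read_cycles();
	do {
		prev_cpu = sched_getcpu();	/* kernel: smp_processor_id() */
		/* kernel: preempt_enable(); rep_nop(); preempt_disable(); */
		cpu = sched_getcpu();
		now = read_cycles();
		/*
		 * If we migrated, the two timestamps may come from
		 * different (unsynchronized) per-CPU counters, so
		 * skip this small amount of time:
		 */
		if (prev_cpu != cpu)
			prev = now;
		left -= now - prev;
		prev = now;
	} while (left > 0);
}
```

Note that on migration the interval is dropped (prev is reset to now before
the subtraction), so the delay errs on the side of waiting slightly longer
rather than trusting a cross-CPU timestamp difference.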