Re: [PATCH 3/16] read/write_crX, clts and wbinvd for 64-bit paravirt

Glauber de Oliveira Costa wrote:
> This patch introduces native versions of the read/write_crX functions,
> clts and wbinvd, and patches callers where needed.
>
> Signed-off-by: Glauber de Oliveira Costa <[email protected]>
> Signed-off-by: Steven Rostedt <[email protected]>
> Acked-by: Jeremy Fitzhardinge <[email protected]>
> ---
>  arch/x86/mm/pageattr_64.c   |    3 +-
>  include/asm-x86/system_64.h |   60 ++++++++++++++++++++++++++++++------------
>  2 files changed, 45 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/mm/pageattr_64.c b/arch/x86/mm/pageattr_64.c
> index c40afba..59a52b0 100644
> --- a/arch/x86/mm/pageattr_64.c
> +++ b/arch/x86/mm/pageattr_64.c
> @@ -12,6 +12,7 @@
>  #include <asm/processor.h>
>  #include <asm/tlbflush.h>
>  #include <asm/io.h>
> +#include <asm/paravirt.h>
>  
>  pte_t *lookup_address(unsigned long address)
>  { 
> @@ -77,7 +78,7 @@ static void flush_kernel_map(void *arg)
>  	   much cheaper than WBINVD. */
>  	/* clflush is still broken. Disable for now. */
>  	if (1 || !cpu_has_clflush)
> -		asm volatile("wbinvd" ::: "memory");
> +		wbinvd();
>  	else list_for_each_entry(pg, l, lru) {
>  		void *adr = page_address(pg);
>  		clflush_cache_range(adr, PAGE_SIZE);
> diff --git a/include/asm-x86/system_64.h b/include/asm-x86/system_64.h
> index 4cb2384..b558cb2 100644
> --- a/include/asm-x86/system_64.h
> +++ b/include/asm-x86/system_64.h
> @@ -65,53 +65,62 @@ extern void load_gs_index(unsigned);
>  /*
>   * Clear and set 'TS' bit respectively
>   */
> -#define clts() __asm__ __volatile__ ("clts")
> +static inline void native_clts(void)
> +{
> +	asm volatile ("clts");
> +}
>  
> -static inline unsigned long read_cr0(void)
> -{ 
> +static inline unsigned long native_read_cr0(void)
> +{
>  	unsigned long cr0;
>  	asm volatile("movq %%cr0,%0" : "=r" (cr0));
>  	return cr0;
>  }
>   

This is a pre-existing bug, but it seems to me that these read/write crX
asms should have a constraint to stop the compiler from reordering them
with respect to each other.  The brute-force approach would be to add
"memory" clobbers, but the subtle fix would be to add a variable which
is only used to sequence:

static int __cr_seq;

static inline unsigned long native_read_cr0(void)
{
	unsigned long cr0;
	/* The dummy "=m" (__cr_seq) operand orders this asm against
	   the write below without clobbering all of memory. */
	asm volatile("mov %%cr0, %0" : "=r" (cr0), "=m" (__cr_seq));
	return cr0;
}

static inline void native_write_cr0(unsigned long val)
{
	asm volatile("mov %1, %%cr0" : "+m" (__cr_seq) : "r" (val));
}
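
For comparison, the brute-force variant mentioned above would look roughly
like this (just a sketch; the full "memory" clobber also orders these asms
against every other memory access, which is the extra cost the sequencing
variable avoids):

static inline unsigned long native_read_cr0(void)
{
	unsigned long cr0;
	/* "memory" clobber keeps the compiler from reordering this
	   read past any other crX access (or any memory operation). */
	asm volatile("mov %%cr0, %0" : "=r" (cr0) : : "memory");
	return cr0;
}

static inline void native_write_cr0(unsigned long val)
{
	asm volatile("mov %0, %%cr0" : : "r" (val) : "memory");
}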


    J
