Re: [PATCH] remove volatile from x86 cmos_lock

On Fri, 2006-07-14 at 17:50 -0400, Dave Jones wrote:
> On Fri, Jul 14, 2006 at 11:09:25AM -0400, Steven Rostedt wrote:
>  > On Fri, 2006-07-14 at 16:53 +0200, Oliver Neukum wrote:
>  > > Am Freitag, 14. Juli 2006 16:48 schrieb Steven Rostedt:
>  > > > @@ -52,14 +54,16 @@ static inline void lock_cmos(unsigned ch
>  > > >  
>  > > >  static inline void unlock_cmos(void)
>  > > >  {
>  > > > -       cmos_lock = 0;
>  > > > +       set_wmb(cmos_lock, 0);
>  > > >  }
>  > > >  static inline int do_i_have_lock_cmos(void)
>  > > >  {
>  > > > +       barrier();
>  > > 
>  > > Shouldn't these be rmb() ?
>  > 
>  > I was thinking that too, but I'm still not sure when to use rmb or
>  > barrier.  wmb seems pretty straightforward though.  Hmm, maybe this
>  > really should be an smp_rmb since I believe a barrier would be ok for UP.
> 
> I'm more puzzled why it's inventing its own locking primitives instead of
> using one of the many the kernel provides.  This stuff is prehistoric though.
> Hangover from the really early days ?
> 
> 		Dave
> 

Here's your answer:

from include/asm-i386/mc146818rtc.h:

/*
 * This lock provides nmi access to the CMOS/RTC registers.  It has some
 * special properties.  It is owned by a CPU and stores the index register
 * currently being accessed (if owned).  The idea here is that it works
 * like a normal lock (normally).  However, in an NMI, the NMI code will
 * first check to see if its CPU owns the lock, meaning that the NMI
 * interrupted during the read/write of the device.  If it does, it goes ahead
 * and performs the access and then restores the index register.  If it does
 * not, it locks normally.
 *
 * Note that since we are working with NMIs, we need this lock even in
 * a non-SMP machine just to mark that the lock is owned.
 *
 * This only works with compare-and-swap.  There is no other way to
 * atomically claim the lock and set the owner.
 */

So it is dealing with NMIs.  Since the lock can be taken from NMI context,
the NMI path can't just spin if it finds the lock held.  It has to check
whether the CPU that owns the lock is the same CPU the NMI came in on, and
if so just go ahead with the access; spinning there would deadlock, because
the interrupted code can never release the lock while the NMI is spinning
on it.
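
Roughly, the lock side looks like this (a from-memory sketch, not a
verbatim copy of the header -- the exact owner/index encoding and helper
spellings may differ):

/* kernel context assumed: smp_processor_id() from <linux/smp.h>,
 * cmpxchg() from <asm/system.h> */

volatile unsigned long cmos_lock;	/* the volatile the patch removes;
					 * 0 == free, else ((cpu + 1) << 8) | index reg */

static inline void lock_cmos(unsigned char reg)
{
	unsigned long new = ((smp_processor_id() + 1) << 8) | reg;

	for (;;) {
		if (cmos_lock)		/* spin until it looks free */
			continue;
		/* atomically claim the lock and record owner + index */
		if (cmpxchg(&cmos_lock, 0, new) == 0)
			return;
	}
}

static inline int do_i_have_lock_cmos(void)
{
	/* NMI path: did this CPU already take the lock? */
	return (cmos_lock >> 8) == smp_processor_id() + 1;
}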

So really the only thing missing here is the memory barriers: without them
the lock doesn't really protect anything, since the compiler and CPU are
free to reorder the CMOS accesses around the plain loads and stores of
cmos_lock.

(I need to get rid of that set_wmb :P )
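
IOW, something like this is probably closer to what I'm after (again just
a sketch; whether the NMI-side check really needs smp_rmb() or only a
compiler barrier() is exactly the open question above):

static inline void unlock_cmos(void)
{
	/* order the CMOS index/data port writes before the store that
	 * drops the lock, instead of the set_wmb() in the patch */
	smp_wmb();
	cmos_lock = 0;
}

static inline int do_i_have_lock_cmos(void)
{
	/* force a fresh read of cmos_lock; for the own-CPU NMI check a
	 * compiler barrier may be enough, cross-CPU visibility of the
	 * owner would want smp_rmb() */
	barrier();
	return (cmos_lock >> 8) == smp_processor_id() + 1;
}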

-- Steve


