On Fri, 2005-08-26 at 08:08 +0200, Ingo Molnar wrote:
> * Steven Rostedt <[email protected]> wrote:
>
> > So, the only other solutions that I can think of are:
> >
> > a) add yet another (bloat) lock to the buffer head.
> >
> > b) Still use your b_uptodate_lock for jbd_lock_bh_journal_head and
> > change jbd_lock_bh_state to what I discussed earlier, namely the
> > hashed wait_on_bit code.
>
> could you try a)? How clean does it get? Personally I'm much more in
> favor of cleanliness. On the vanilla kernel a spinlock is zero bytes on
> UP [the most RAM-sensitive platform], and it's a word on typical SMP.
Not only the cleanest, but also the simplest :-)
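(On the size point Ingo makes: on a UP build without spinlock debugging,
spinlock_t is an empty struct, so the extra field costs nothing there, and
on typical SMP it is one word. Below is a rough userspace sketch of that,
not kernel code: the toy struct layouts are invented purely for
illustration, and a GCC empty struct stands in for the UP spinlock_t.)

#include <stdio.h>

/* Stand-in for the UP, non-debug spinlock_t: an empty struct (GCC
 * extension, zero bytes), matching the "zero bytes on UP" case. */
typedef struct { } up_spinlock_t;

/* Stand-in for the SMP spinlock_t: one word of lock state. */
typedef struct { unsigned int slock; } smp_spinlock_t;

/* Toy structs only -- the real struct buffer_head has many more fields.
 * These just show what the extra b_state_lock member costs. */
struct toy_bh_up {
	unsigned long b_state;
	up_spinlock_t b_uptodate_lock;
	up_spinlock_t b_state_lock;	/* new lock from option (a) */
};

struct toy_bh_smp {
	unsigned long b_state;
	smp_spinlock_t b_uptodate_lock;
	smp_spinlock_t b_state_lock;
};

int main(void)
{
	printf("UP-style toy bh:  %zu bytes\n", sizeof(struct toy_bh_up));
	printf("SMP-style toy bh: %zu bytes\n", sizeof(struct toy_bh_smp));
	return 0;
}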
-- Steve
Signed-off-by: Steven Rostedt <[email protected]>
Index: linux_realtime_ernie/fs/buffer.c
===================================================================
--- linux_realtime_ernie/fs/buffer.c (revision 303)
+++ linux_realtime_ernie/fs/buffer.c (working copy)
@@ -3053,6 +3053,7 @@
{
BUG_ON(!list_empty(&bh->b_assoc_buffers));
BUG_ON(spin_is_locked(&bh->b_uptodate_lock));
+ BUG_ON(spin_is_locked(&bh->b_state_lock));
kmem_cache_free(bh_cachep, bh);
preempt_disable();
__get_cpu_var(bh_accounting).nr--;
@@ -3071,6 +3072,7 @@
memset(bh, 0, sizeof(*bh));
INIT_LIST_HEAD(&bh->b_assoc_buffers);
spin_lock_init(&bh->b_uptodate_lock);
+ spin_lock_init(&bh->b_state_lock);
}
}
Index: linux_realtime_ernie/include/linux/buffer_head.h
===================================================================
--- linux_realtime_ernie/include/linux/buffer_head.h (revision 303)
+++ linux_realtime_ernie/include/linux/buffer_head.h (working copy)
@@ -62,6 +62,7 @@
void *b_private; /* reserved for b_end_io */
struct list_head b_assoc_buffers; /* associated with another mapping */
spinlock_t b_uptodate_lock;
+ spinlock_t b_state_lock;
};
/*
Index: linux_realtime_ernie/include/linux/jbd.h
===================================================================
--- linux_realtime_ernie/include/linux/jbd.h (revision 303)
+++ linux_realtime_ernie/include/linux/jbd.h (working copy)
@@ -326,32 +326,32 @@
static inline void jbd_lock_bh_state(struct buffer_head *bh)
{
- bit_spin_lock(BH_State, &bh->b_state);
+ spin_lock(&bh->b_state_lock);
}

static inline int jbd_trylock_bh_state(struct buffer_head *bh)
{
- return bit_spin_trylock(BH_State, &bh->b_state);
+ return spin_trylock(&bh->b_state_lock);
}

static inline int jbd_is_locked_bh_state(struct buffer_head *bh)
{
- return bit_spin_is_locked(BH_State, &bh->b_state);
+ return spin_is_locked(&bh->b_state_lock);
}

static inline void jbd_unlock_bh_state(struct buffer_head *bh)
{
- bit_spin_unlock(BH_State, &bh->b_state);
+ spin_unlock(&bh->b_state_lock);
}

static inline void jbd_lock_bh_journal_head(struct buffer_head *bh)
{
- bit_spin_lock(BH_JournalHead, &bh->b_state);
+ spin_lock(&bh->b_uptodate_lock);
}

static inline void jbd_unlock_bh_journal_head(struct buffer_head *bh)
{
- bit_spin_unlock(BH_JournalHead, &bh->b_state);
+ spin_unlock(&bh->b_uptodate_lock);
}

struct jbd_revoke_table_s;
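
(What the jbd.h half of the patch does is move the BH_State serialization
off a bit crammed into bh->b_state and onto a real lock object, which is
what lets -rt treat it like any other spinlock. Below is a rough userspace
model of the before/after shape, not kernel code: C11 atomics play the bit
spinlock, a pthread mutex plays the dedicated lock, and the struct and
names are invented to mirror the patch for readability. Compile with
gcc -pthread.)

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

enum { BH_State_bit = 2 };		/* arbitrary bit number for the model */

struct model_bh {
	atomic_ulong b_state;		/* flag word, as in struct buffer_head */
	pthread_mutex_t b_state_lock;	/* dedicated lock added by option (a) */
};

/* Before: the lock is a bit inside the b_state word (busy-waits). */
static void bit_lock_bh_state(struct model_bh *bh)
{
	while (atomic_fetch_or(&bh->b_state, 1UL << BH_State_bit) &
	       (1UL << BH_State_bit))
		;	/* spin until we were the ones to set the bit */
}

static void bit_unlock_bh_state(struct model_bh *bh)
{
	atomic_fetch_and(&bh->b_state, ~(1UL << BH_State_bit));
}

/* After: a real lock object; the mutex here stands in for the new
 * per-buffer-head spinlock_t. */
static void jbd_lock_bh_state(struct model_bh *bh)
{
	pthread_mutex_lock(&bh->b_state_lock);
}

static void jbd_unlock_bh_state(struct model_bh *bh)
{
	pthread_mutex_unlock(&bh->b_state_lock);
}

int main(void)
{
	struct model_bh bh = {
		.b_state = 0,
		.b_state_lock = PTHREAD_MUTEX_INITIALIZER,
	};

	bit_lock_bh_state(&bh);		/* old scheme */
	bit_unlock_bh_state(&bh);

	jbd_lock_bh_state(&bh);		/* new scheme from the patch */
	jbd_unlock_bh_state(&bh);

	puts("both schemes exercised");
	return 0;
}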