Re: [PATCH] Change ll_rw_block() calls in JBD

  Hello,

> I have got a concern about this code fragment:
> 
> >+		if (buffer_dirty(bh)) {
> >+			if (test_set_buffer_locked(bh)) {
> >+				...
> >+				get_bh(bh);
> >+				spin_unlock(&journal->j_list_lock);
> >+				/* Submit all accumulated buffers to prevent
> >+				 * possible deadlocks */
> >+				submit_buffers(bufs, wbuf);
> >+				bufs = 0;
> >+				lock_buffer(bh);
> >+				spin_lock(&journal->j_list_lock);
> >+				/* Someone already cleaned up the buffer? */
> >+				if (!buffer_jbd(bh)
> >+					|| jh->b_jlist != BJ_SyncData) {
> >+					unlock_buffer(bh);
> >+				BUFFER_TRACE(bh, "already cleaned up");
> >+					put_bh(bh);
> >+					continue;
> >+				}
> >+				put_bh(bh);
> >+			}
> >+			if (test_clear_buffer_dirty(bh)) {
> >+				BUFFER_TRACE(bh, "needs writeout, submitting");
> >+				get_bh(bh);
> >+				wbuf[bufs++] = bh;
> >+				if (bufs == journal->j_wbufsize) {
> >+					spin_unlock(&journal->j_list_lock);
> >+					/* Writeout will be done at the
> >+					 * beginning of the loop */
> >+					goto write_out_data;
> >+				}
> >+			}
> 
> We lock up to "->j_wbufsize" buffers, one after the other.
> 
> Originally, we took a buffer, did something to it, and released it.
> When we wanted two of them and the second one was not available, we
> released the first one, too, in order to avoid deadlocks.
> 
> Keeping a number of buffers locked for an unpredictably long time...
> (Here you keep N-1 buffers locked while you are waiting for the Nth one.)
> And not imposing / respecting any locking order...
  You are not right here. This is exactly why we call submit_buffers()
before we call lock_buffer(). So either we get the lock without waiting via
test_set_buffer_locked(), or we first submit all accumulated buffers for IO
(and hence they will get unlocked eventually once the IO completes).
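(For illustration only: a minimal user-space sketch of this pattern using
pthread mutexes. Names such as add_item() and submit_batch() are made up
for the example and are not part of JBD.)

/* Try-lock first; on failure, flush what we already hold, then block. */
#include <pthread.h>
#include <stdio.h>

#define MAX_BATCH 8

struct item {
	pthread_mutex_t lock;
	int data;
};

static struct item *batch[MAX_BATCH];
static int nbatch;

/* "Submit" (here: just print) and unlock every accumulated item. */
static void submit_batch(void)
{
	int i;

	for (i = 0; i < nbatch; i++) {
		printf("submitting item %d\n", batch[i]->data);
		pthread_mutex_unlock(&batch[i]->lock);
	}
	nbatch = 0;
}

static void add_item(struct item *it)
{
	if (pthread_mutex_trylock(&it->lock) != 0) {
		/* Lock is busy: flush everything we hold so its holder
		 * can make progress, and only then block on the lock. */
		submit_batch();
		pthread_mutex_lock(&it->lock);
	}
	batch[nbatch++] = it;
	if (nbatch == MAX_BATCH)
		submit_batch();
}

int main(void)
{
	struct item items[3];
	int i;

	for (i = 0; i < 3; i++) {
		pthread_mutex_init(&items[i].lock, NULL);
		items[i].data = i;
		add_item(&items[i]);
	}
	submit_batch();
	return 0;
}

The point is that we never block waiting for a lock while still holding
locked but unsubmitted buffers: before blocking, everything we have
accumulated is sent for IO, so it will get unlocked on IO completion and
nobody can deadlock waiting on it.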

> The original code did not lock the buffers while it was composing the
> list of buffers. "ll_rw_block()" locked the buffers one by one
> and submitted them to the block layer. These buffers eventually got
> unlocked, possibly before some of the last buffers got locked by
> "ll_rw_block()".
> 
> This change implies an important difference in locking behavior.
  I agree the locking behaviour has changed a bit, but not as much as you
think.

								Honza
-- 
Jan Kara <[email protected]>
SuSE CR Labs
