Re: [PATCH] bsg, block layer sg

On Mon, Mar 06 2006, Andrew Morton wrote:
> Jens Axboe <[email protected]> wrote:
> >
> > ...
> > > 
> > > If you expand the two above statements you get:
> > > 
> > > 	spin_lock_irqsave(q->queue_lock, flags);
> > > 	__elv_add_request(q, rq, where, plug);
> > > 	spin_unlock_irqrestore(q->queue_lock, flags);
> > > 	spin_lock_irq(q->queue_lock);
> > > 	__generic_unplug_device(q);
> > > 	spin_unlock_irq(q->queue_lock);
> > > 
> > > which is a bit sad.
> > 
> > Indeed, I'll do the locking manually and use the __ functions.
> 
> blk_execute_rq_nowait() and pkt_generic_packet() also do the above two
> calls.   It might be worth creating a new library function.

Yes it might; there are other call sites like this in the kernel. But
it's basically blk_execute_rq_nowait(), so I'll make that change.
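
To make the intent concrete, here's a rough sketch of the combined
form. The helper name is hypothetical; in practice it's just the guts
of blk_execute_rq_nowait():

	static void blk_add_and_unplug(request_queue_t *q,
				       struct request *rq,
				       int where, int plug)
	{
		unsigned long flags;

		spin_lock_irqsave(q->queue_lock, flags);
		__elv_add_request(q, rq, where, plug);
		__generic_unplug_device(q);
		spin_unlock_irqrestore(q->queue_lock, flags);
	}

One lock round trip instead of two, same end result as the expansion
quoted above.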

> > > > +static int bsg_complete_all_commands(struct bsg_device *bd)
> > > > +{
> > > > +	struct bsg_command *bc;
> > > > +	int ret, tret;
> > > > +
> > > > +	dprintk("%s: entered\n", bd->name);
> > > > +
> > > > +	set_bit(BSG_F_BLOCK, &bd->flags);
> > > > +
> > > > +	/*
> > > > +	 * wait for all commands to complete
> > > > +	 */
> > > > +	ret = 0;
> > > > +	do {
> > > > +		ret = bsg_io_schedule(bd, TASK_UNINTERRUPTIBLE);
> > > > +		/*
> > > > +		 * look for -ENODATA specifically -- we'll sometimes get
> > > > +		 * -ERESTARTSYS when we've taken a signal, but we can't
> > > > +		 * return until we're done freeing the queue, so ignore
> > > > +		 * it.  The signal will get handled when we're done freeing
> > > > +		 * the bsg_device.
> > > > +		 */
> > > > +	} while (ret != -ENODATA);
> > > > +
> > > > +	/*
> > > > +	 * discard done commands
> > > > +	 */
> > > 
> > > Would it be useful to reap the completed commands earlier?  While their
> > > predecessors are still in flight?
> > 
> > Not sure I follow... You mean if someone attempts to queue and fails
> > because we are out of commands, then try and reap some done commands?
> 
> Rather than waiting for all commands to complete then freeing them all
> up, it might be possible to free some of them earlier, while the rest
> are still in flight.  Pipeline the reaping with the I/O completion.  A
> minor saving in cycles and memory, perhaps.   Probably it's not worth
> the complexity - I was just asking ;)

For the example above, it's the final shutdown case, just waiting for
all commands to complete. So it's definitely not worth the extra
complexity :-)
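
For the record, the pipelined variant you describe would look roughly
like this. Sketch only: bsg_next_done_cmd() and bsg_free_command()
are made-up names standing in for whatever the real done-list
accessors would be:

	static int bsg_reap_as_they_complete(struct bsg_device *bd)
	{
		struct bsg_command *bc;
		int ret;

		do {
			ret = bsg_io_schedule(bd, TASK_UNINTERRUPTIBLE);

			/*
			 * reap whatever has completed so far, while the
			 * rest of the commands are still in flight
			 * (helpers here are hypothetical)
			 */
			while ((bc = bsg_next_done_cmd(bd)) != NULL)
				bsg_free_command(bc);
		} while (ret != -ENODATA);

		return ret;
	}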

-- 
Jens Axboe
