Re: [PATCH 1/6] UML - Console locking fixes

On Fri, 29 Dec 2006 18:41:27 -0500 Jeff Dike wrote:

>  arch/um/drivers/line.c          |  188 ++++++++++++++++++++++++++--------------
>  arch/um/drivers/stdio_console.c |    6 -
>  arch/um/include/line.h          |   14 +-
>  3 files changed, 135 insertions(+), 73 deletions(-)
> 
> Index: linux-2.6.18-mm/arch/um/drivers/line.c
> ===================================================================
> --- linux-2.6.18-mm.orig/arch/um/drivers/line.c	2006-12-29 15:12:45.000000000 -0500
> +++ linux-2.6.18-mm/arch/um/drivers/line.c	2006-12-29 17:26:26.000000000 -0500
> @@ -421,42 +420,84 @@ int line_setup_irq(int fd, int input, in
>  	return err;
>  }
>  
> +/* Normally, a driver like this can rely mostly on the tty layer

Kernel style puts the opening /* of a multi-line comment on a line by itself:

/*
 * Normally, ...

> + * locking, particularly when it comes to the driver structure.
> + * However, in this case, mconsole requests can come in "from the
> + * side", and race with opens and closes.
> + *
> + * The problem comes from line_setup not wanting to sleep if
> + * the device is open or being opened.  This can happen because the
> + * first opener of a device is responsible for setting it up on the
> + * host, and that can sleep.  The open of a port device will sleep
> + * until someone telnets to it.
> + *
> + * The obvious solution of putting everything under a mutex fails
> + * because then trying (and failing) to change the configuration of an
> + * open(ing) device will block until the open finishes.  The right
> + * thing to happen is for it to fail immediately.
> + *
> + * We can put the opening (and closing) of the host device under a
> + * separate lock, but that has to be taken before the count lock is
> + * released.  Otherwise, you open a window in which another open can
> + * come through and assume that the host side is opened and working.
> + *
> + * So, if the tty count is one, open will take the open mutex
> + * inside the count lock.  Otherwise, it just returns. This will sleep
> + * if the last close is pending, and will block a setup or get_config,
> + * but that should not last long.
> + *
> + * So, what we end up with is that open and close take the count lock.
> + * If the first open or last close are happening, then the open mutex
> + * is taken inside the count lock and the host opening or closing is done.
> + *
> + * setup and get_config only take the count lock.  setup modifies the
> + * device configuration only if the open count is zero.  Arbitrarily
> + * long blocking of setup doesn't happen because something would have to be
> + * waiting for an open to happen.  However, a second open with
> + * tty->count == 1 can't happen, and a close can't happen until the open
> + * had finished.
> + *
> + * We can't maintain our own count here because the tty layer doesn't
> + * match opens and closes.  It will call close if an open failed, and
> + * a tty hangup will result in excess closes.  So, we rely on
> + * tty->count instead.  It is one on both the first open and last close.
> + */
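The comment reads well to me, BTW.  For anyone following along, the
scheme it describes can be modeled in userspace with pthreads -- this is
an illustrative sketch only, not the driver code (the names model_line,
model_open, model_setup are made up, and a plain mutex stands in for the
spinlock):

```c
#include <pthread.h>

/* Userspace model of the locking scheme: count_lock guards the open
 * count and the configuration; open_mutex serializes the (possibly
 * sleeping) host-side open/close. */
struct model_line {
	pthread_mutex_t count_lock;	/* stands in for the spinlock */
	pthread_mutex_t open_mutex;
	int tty_count;			/* stands in for tty->count */
	int host_open;
};

/* The first opener takes open_mutex *inside* count_lock, so there is
 * no window in which a second open can see the count bumped but the
 * host side not yet open. */
int model_open(struct model_line *line)
{
	pthread_mutex_lock(&line->count_lock);
	line->tty_count++;
	if (line->tty_count > 1) {
		/* Host side is already open (or being opened) - done. */
		pthread_mutex_unlock(&line->count_lock);
		return 0;
	}
	pthread_mutex_lock(&line->open_mutex);
	pthread_mutex_unlock(&line->count_lock);

	line->host_open = 1;	/* may sleep in the real driver */

	pthread_mutex_unlock(&line->open_mutex);
	return 0;
}

/* setup takes only count_lock and fails immediately when the device
 * is open, instead of blocking behind a sleeping host-side open. */
int model_setup(struct model_line *line)
{
	int err = 0;

	pthread_mutex_lock(&line->count_lock);
	if (line->tty_count > 0)
		err = -16;	/* -EBUSY */
	pthread_mutex_unlock(&line->count_lock);
	return err;
}
```

The point the model makes is the asymmetry: open/close pay for the
open_mutex only on the first open or last close, while setup and
get_config never touch it, so a stuck open can't wedge mconsole.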
> +
>  int line_open(struct line *lines, struct tty_struct *tty)
>  {
> -	struct line *line;
> +	struct line *line = &lines[tty->index];
>  	int err = -ENODEV;
>  
> -	line = &lines[tty->index];
> -	tty->driver_data = line;
> +	spin_lock(&line->count_lock);
> +	if(!line->valid)

Kernel style wants a space between "if" and the paren:

	if (

(several of these in this patch)

> +		goto out_unlock;
> +
> +	err = 0;
> +	if(tty->count > 1)
> +		goto out_unlock;
>  
...

> +	if(!line->sigio){
> +		chan_enable_winch(&line->chan_list, tty);
> +		line->sigio = 1;
>  	}
>  
> -	err = 0;
>  }

> @@ -466,25 +507,38 @@ void line_close(struct tty_struct *tty, 
> +	if(line == NULL)
> +		return;

again: space after "if".

>  
>  	/* We ignore the error anyway! */
>  	flush_buffer(line);
>  
> -	if(tty->count == 1){
> -		line->tty = NULL;
> -		tty->driver_data = NULL;
> -
> -		if(line->sigio){
> -			unregister_winch(tty);
> -			line->sigio = 0;
> -		}
> +	spin_lock(&line->count_lock);
> +	if(!line->valid)
> +		goto out_unlock;
> +
> +	if(tty->count > 1)
> +		goto out_unlock;
> +

ditto, tritto (both if()s above).

> +	mutex_lock(&line->open_mutex);
> +	spin_unlock(&line->count_lock);
> +
> +	line->tty = NULL;
> +	tty->driver_data = NULL;
> +
> +	if(line->sigio){

again: "if (line->sigio) {".

> +		unregister_winch(tty);
> +		line->sigio = 0;
>          }
>  
> -	spin_unlock_irq(&line->lock);
> +	mutex_unlock(&line->open_mutex);
> +	return;
> +
> +out_unlock:
> +	spin_unlock(&line->count_lock);
>  }
>  
>  void close_lines(struct line *lines, int nlines)


---
~Randy
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
