Re: [patch/rfc 1/4] GPIO implementation framework

On Nov 14, 2007 3:06 AM, David Brownell <[email protected]> wrote:
> On Monday 12 November 2007, eric miao wrote:
> > Hi David,
> >
> > I hope I was not late giving my humble feedback on this framework :-)
> >
> > Can we use "per gpio based" structure instead of "per gpio_chip" based one,
> > just like what the generic IRQ layer is doing nowadays?
>
> We "can" do most anything.  What would that improve though?
>
> Each irq_chip handles multiple IRQs ... just like this patch
> has each gpio_chip handling multiple GPIOs.  (Not that I think
> GPIO code should closely model IRQ code; they need to address
> very different problems.)
>
> I can't tell what you intend to suggest as a "per-GPIO" data
> structure; since I can think of at least three different ways
> to do such a thing, you should be more concrete.  I'd think it
> should be in *addition* to a gpio_chip structure though.
>

Exactly. I'd say a per-GPIO "struct gpio_desc", roughly like the sketch below.
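
(This is only an illustrative sketch, not the posted patch; the exact field
names, and the "label" member, are assumptions on my part.)

struct gpio_desc {
        struct gpio_chip *chip;         /* controller this GPIO belongs to */
        unsigned requested:1;           /* claimed via gpio_request() */
        unsigned is_out:1;              /* currently configured as output */
        const char *label;              /* owner tag, for diagnostics */
};

static struct gpio_desc gpio_desc[ARCH_NR_GPIOS];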


>
> > So that
> >
> > a. you don't have to declare per gpio_chip "can_sleep", "is_out" and
> > "requested".
> > Those will be just bits of properties of a single GPIO.
>
> The can_sleep value is a per-controller thing.  The other bits are
> indeed per-GPIO.
>
> So do you mean a structure with two bits, plus a pointer to a
> gpio_chip, plus likely other stuff (what?) to make it work?
> What would the hot-path costs be (for getting/setting values of
> an on-chip GPIO)?
>

The cost would be as trivial as it is with the current per-chip table; see the sketch below.
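
As a sketch (again assuming the gpio_desc[] array above, not the actual
patch), the value-read hot path would still be one array index, one pointer
chase, and one indirect call:

static inline int __gpio_get_value(unsigned gpio)
{
        struct gpio_desc *desc = &gpio_desc[gpio];
        struct gpio_chip *chip = desc->chip;

        /* same work as the chips[] lookup: index, dereference, indirect call */
        return chip->get(chip, gpio - chip->base);
}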

>
> > b. and furthermore, one can avoid the use of ARCH_GPIOS_PER_CHIP, which
> > leads to many holes
>
> Why should holes (in the GPIO number sequence) be a problem?  In
> this code, they don't cost much space at all.  They'd cost more
> if there were a per-GPIO structure though...
>

Well, I don't think holes are a problem either, but consider the restriction
ARCH_GPIOS_PER_CHIP places on GPIO numbering. Don't you think we need a
more flexible numbering scheme, so that gpio_to_irq() and irq_to_gpio()
can be adjusted easily later on?
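
For example (purely hypothetical; a ->to_irq hook is not in the posted
patch), a per-chip translation would let each bank define its own mapping
instead of relying on a fixed global stride:

static inline int gpio_to_irq(unsigned gpio)
{
        struct gpio_chip *chip = gpio_desc[gpio].chip;

        /* each controller translates its own offsets; no global stride */
        return chip->to_irq ? chip->to_irq(chip, gpio - chip->base) : -ENXIO;
}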

> The only downside of GPIOS_PER_CHIP that I know of right now
> is that it complicates mixing gpio bank sizes; it's a ceiling,
> some controllers would allocate more than they need.  The
> upside of that is efficiency, and a closer match to how
> underlying hardware works.
>
> Of course, GPIOS_PER_CHIP *could* be decoupled from how the
> table of gpio_chip pointers is managed.  If the table were to
> group GPIOs in units of 8, a gpio_chip with 32 GPIOs could
> take four adjacent entries while an 8-bit GPIO expander could
> take just one.  That'd be a very easy patch, supporting a more
> dense allocation of GPIO numbers... although it would increase
> static memory consumption by typically NR_GPIOS/4 pointers.
>
>
> > c. gpio_to_chip() will be made easy and straight forward
>
> I'd say "return chips[gpio / ARCH_GPIOS_PER_CHIP]" already meets
> both criteria!
>
> There's also "efficient" to consider; this way doesn't cost much
> memory or add levels of indirection (compared to most platforms,
> which already use a similar array).
>
>
> > d. granularity of spin_lock()/_unlock() can be made small
> > (per GPIO instead of per gpio_chip)
>
> Why would per-GPIO locking be needed though?  Look again...
>
> The locking is there fundamentally because gpio_chip structures
> may need to be unregistered; that's not a per-gpio issue.
> Even when a gpio is marked in chip->requested under that lock,
> that's part of ensuring that the unregistration is prevented so
> long as the GPIO is in active use.
>
> Plus, fine grained locking is rarely a good idea; it normally
> increases locking overhead by involving multiple locks.  Only
> add extra locks if a single lock sees too much contention; and
> even then, only if that contention can't be removed by using a
> smarter design.
>

Well, I don't see much benefit for now either. But binding/unbinding a
gpio_chip to its gpio_descs could probably be made lock-free on the read
side (with RCU, maybe), since removing a gpio_chip is really *rare*.
Something along the lines of the sketch below.
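
A rough sketch of what I mean (again assuming the gpio_desc[] array above;
the RCU usage here is my assumption, not part of the posted patch):

static inline int __gpio_get_value(unsigned gpio)
{
        struct gpio_chip *chip;
        int value = 0;

        /* read side: no spinlock on the value-access path */
        rcu_read_lock();
        chip = rcu_dereference(gpio_desc[gpio].chip);
        if (chip)
                value = chip->get(chip, gpio - chip->base);
        rcu_read_unlock();

        return value;
}

/* unregistration would clear desc->chip with rcu_assign_pointer() and then
 * call synchronize_rcu() before the gpio_chip goes away */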

> - Dave
>
>
>
>
> > What do you think?
> >
> > - eric
> >
> > On Nov 6, 2007 5:05 AM, David Brownell <[email protected]> wrote:
> > > On Monday 29 October 2007, David Brownell wrote:
> > > >
> > > > Provides new implementation infrastructure that platforms may choose to use
> > > > when implementing the GPIO programming interface. Platforms can update their
> > > > GPIO support to use this. The downside is slower access to non-inlined GPIOs;
> > > > rarely a problem except when bitbanging some protocol.
> > >
> > > I was asked just what that overhead *is* ... and it surprised me.
> > > A summary of the results is appended to this note.
> > >
> > > Fortunately it turns out those problems all go away if the gpiolib
> > > code uses a *raw* spinlock to guard its table lookups.  With a raw
> > > spinlock, any performance impact of gpiolib seems to be well under
> > > a microsecond in this bitbang context (and not objectionable).
> > > Preempt became free; enabling debug options had only a minor cost.
> > >
> > > That's as it should be, since the only substantive changes were to
> > > grab and release a lock, do one table lookup a bit differently, and
> > > add one indirection function call ... changes which should not have
> > > any visible performance impact on per-bit codepaths, and one might
> > > expect to cost on the order of one dozen instructions.
> > >
> > >
> > > So the next version of this code will include a few minor bugfixes,
> > > and will also use a raw spinlock to protect that table.  A raw lock
> > > seems appropriate there in any case, since non-sleeping GPIOs should
> > > be accessible from hardirq contexts even on RT kernels.
> > >
> > > If anyone has any strong arguments against using a raw spinlock
> > > to protect that table, it'd be nice to know them sooner rather
> > > than later.
> > >
> > > - Dave
> > >
> >
>

-- 
Cheers
- eric