> However, it would be nice if someone compared the performance of both
> variants.
> > * original code could be implemented in a single DMA chain on
> > at least two systems I happen to have handy ... one IRQ.
> >
> > * this version requires two separate chains, with an intervening
> > task schedule. (More than twice the cost.)
They differ by at least the cost of an extra IRQ and task schedule, so
the performance delta would be load-dependent. In a real-time setup
where that task was given time slices at fixed intervals, that might
mean the two-request version takes twice as long. ;)
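To make that cost difference concrete, here's a rough sketch of the two
shapes. Every descriptor field and helper name below is made up purely
for illustration (real controllers differ in the details): a single
chain lets the write and the read run back-to-back with one IRQ at the
end, while two chains put an IRQ, a wakeup, and a task schedule between
them.

#include <linux/types.h>

/* Hypothetical chained-descriptor layout. */
struct dma_desc {
	dma_addr_t	src;
	dma_addr_t	dst;
	size_t		len;
	u32		flags;			/* DESC_IRQ_ON_COMPLETE etc. */
	struct dma_desc	*next;			/* hardware follows this link */
};
#define DESC_IRQ_ON_COMPLETE	(1u << 0)

/*
 * Variant A, one chain: desc[0] is mem2per (push the command out),
 * desc[1] is per2mem (pull the reply in).  Only the last descriptor
 * interrupts; no software runs between the two transfers.
 */
static void setup_single_chain(struct dma_desc desc[2],
			       dma_addr_t txbuf, dma_addr_t rxbuf,
			       dma_addr_t tx_fifo, dma_addr_t rx_fifo,
			       size_t txlen, size_t rxlen)
{
	desc[0].src = txbuf;   desc[0].dst = tx_fifo; desc[0].len = txlen;
	desc[0].flags = 0;			/* no IRQ in the middle */
	desc[0].next = &desc[1];

	desc[1].src = rx_fifo; desc[1].dst = rxbuf;   desc[1].len = rxlen;
	desc[1].flags = DESC_IRQ_ON_COMPLETE;
	desc[1].next = NULL;
}

/*
 * Variant B, two chains: the first chain ends in an IRQ, the handler
 * wakes a task, the scheduler eventually runs it, and only then does
 * it submit the second chain ... which is where the "more than twice
 * the cost" under load comes from.
 */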
> Whoops. Lemme express my thoughts here.
> First, the DMA controller that can handle chained requests which are
> working with different devices (i.e. write then read does that - first
> mem2per, then per2mem) are *very* rare thing.
The way they usually work is to allocate two DMA channels to the device,
one for read and one for write. The controller just makes sure there's
one byte read for every byte written, and the two DMA channels stream
along in lockstep. (Possibly one of them talks to a bitbucket.)
AT91, OMAP, and PXA all claim hardware support sufficient to
do that. (Errata aside.) That's not even rare hardware, much less
"very rare"; they're stocked at DigiKey, ready to throw into boards.
> > - Chipselect works differently in your code. You're giving one
> > driver control over all the devices sharing a controller, by
> > blocking requests going to other devices until your driver yields
> > the chipselect.
> >
> >
> The controllers we were working with are _not_ able to handle
> simultaneous requests to different devices on the same bus.
Of course not; SPI doesn't work that way. But my point was that your
changes show a controller driver that's clearly required to "lock" the
bus, through that chipselect, between requests ... which is not only
antithetical to the "requests are asynchronous" model that had earlier
been agreed on, but also creates problems of its own:
> > This seems like a bad idea even if you ignore
> > how it produces priority inversions in your framework. :)
So consider two different tasks accessing devices on the same SPI bus.
One is lower priority, currently waiting for an SPI request to complete:
a request carrying your magic "leave me owning the chipselect/bus after
this request" flag, because the first thing that task will do when it
returns is start another transfer. (And in the case of the MTD driver,
that transfer could already have been queued, as sketched below,
removing the issue as well as the need for that flag.)
Now the high priority task issues a request to the other device on
that same SPI bus. This means that *two* other tasks ought to be
temporarily operating with that higher priority: whatever task
you've allocated to manage the I/O queue on that bus (plus maybe
an IRQ thread under PREEMPT_RT); and the task that's waiting for that
transfer to complete. Inversions ...
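For comparison, here's roughly what the flash-read path looks like when
the command and the data are queued together as one message, in the
spi_message/spi_transfer style being discussed in this thread (names
follow include/linux/spi/spi.h; treat them as a sketch if your tree
differs). Chipselect stays asserted across both transfers because they
belong to the same message, not because the caller locked the bus, so
the controller driver is free to run other devices' messages before and
after it.

#include <linux/spi/spi.h>

/* Read 'len' bytes from a serial flash at 'addr': one message, two transfers. */
static int flash_read(struct spi_device *spi, u32 addr, void *buf, size_t len)
{
	/* small command buffer; a real driver would keep this DMA-safe */
	u8 cmd[4] = { 0x03, addr >> 16, addr >> 8, addr };	/* READ opcode */
	struct spi_transfer t[2] = {
		{ .tx_buf = cmd, .len = sizeof(cmd), },
		{ .rx_buf = buf, .len = len, },
	};
	struct spi_message m;

	spi_message_init(&m);
	spi_message_add_tail(&t[0], &m);
	spi_message_add_tail(&t[1], &m);

	/*
	 * Both transfers are queued before anything touches the bus.
	 * The controller driver asserts chipselect before t[0], holds it
	 * across t[1], and drops it afterwards; no "leave me owning the
	 * bus" flag, and no window where another device's request sits
	 * blocked behind a half-submitted one.
	 */
	return spi_sync(spi, &m);
}

Queued that way, the only task a high priority request needs to boost is
the one draining the bus queue, which is the ordinary inheritance case
rather than an inversion.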
- Dave