David Brownell wrote:
>> Of course we want to use scatter-gather lists.
> The only way "of course" applies is if you're accepting requests
> from the block layer, which talks in terms of "struct scatterlist".
> In my investigations of SPI, I don't happen to have come across any
> SPI slave device that would naturally be handled as a block device.
> There's lots of flash (and dataflash); that's MTD, not block.
What about SD controllers on the SPI bus? I've worked with two of them. ;-)
>> The DMA controller mentioned above can handle only 0xFFF transfer
>> units per transfer, so we have to split large transfers into SG lists.
> Odd, I've seen plenty of other drivers that just segment large buffers
> into multiple DMA transfers ... without wanting "struct scatterlist".
I didn't claim that struct scatterlist is to be used. Our SG lists use
'hardware-driven' structures, i.e. the descriptor layout the DMA
controller itself expects.
> - More often they just break big buffers into lots of little
>   transfers. Just like PIO, but faster. (And in fact, they may
>   need to prime the pump with some PIO to align the buffer.)
That won't work in some cases: SPI might generate additional clock
cycles after each transfer, which won't happen in the case of a real SG
transfer.
> - Sometimes they just reject segments that are too large to
>   handle cleanly at a low level, and require higher level code
>   to provide more bite-sized blocks of I/O.
It's possible in some cases but won't work in others; see above.
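
To make our limit concrete: splitting a large, already-mapped buffer
into back-to-back transfers of at most 0xFFF units would look roughly
like this (a sketch only; MAX_DMA_UNITS and start_dma_xfer() are
hypothetical stand-ins for the real controller's limit and descriptor
setup):

#include <linux/kernel.h>
#include <linux/dma-mapping.h>

#define MAX_DMA_UNITS	0xFFF

/* Segment one mapped buffer into chunks the controller can take. */
static void segment_and_start(dma_addr_t dma, size_t len)
{
	while (len) {
		size_t chunk = min_t(size_t, len, MAX_DMA_UNITS);

		start_dma_xfer(dma, chunk);	/* hypothetical helper */
		dma += chunk;
		len -= chunk;
	}
}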
If "now" _were_ the point we need to handle scatterlists, I've shown
a nice efficient way to handle them, already well proven in the context
of another serial bus protocol (USB).
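
For completeness, that approach boils down to mapping the scatterlist
once and walking the mapped segments, with no copying anywhere; a rough
sketch (using the for_each_sg() idiom; queue_segment() is a hypothetical
controller helper):

#include <linux/errno.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

static int xfer_sg(struct device *dev, struct scatterlist *sgl, int nents)
{
	struct scatterlist *sg;
	int i, mapped;

	/* One mapping call covers the whole list; no data is moved. */
	mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
	if (!mapped)
		return -ENOMEM;

	/* Hand each mapped segment straight to the controller. */
	for_each_sg(sgl, sg, mapped, i)
		queue_segment(sg_dma_address(sg), sg_dma_len(sg));

	dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
	return 0;
}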
>> Moreover, that looks like it may imply redundant data copying.
> Absolutely not. Everything was aimed at zero-copy I/O; why do
> you think I carefully described "DMA mapping" everywhere, rather
> than "memcpy"?
I'm afraid that copying may be implicit.
>> Can you please elaborate on what you meant by 'readiness to accept
>> DMA addresses' for the controller drivers?
> Go look at the parts of the USB stack I mentioned. That's what I mean.
> - In the one case, DMA-aware controller drivers look at each buffer
>   to determine whether they have to manage the mappings themselves.
>   If the caller provided the DMA address, they won't set up mappings.
> - In the other case, they always expect their caller to have set
>   up the DMA mappings. (Where "caller" is infrastructure code,
>   not the actual driver issuing the I/O request.)
> The guts of such drivers would only talk in terms of DMA; the way those
> cases differ is how the driver entry/exit points ensure that can be done.
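
If I read that correctly, the first case would look roughly like so (a
sketch patterned on USB's urb->transfer_dma convention; struct my_xfer
and its fields are made up for illustration, not an existing API):

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>

struct my_xfer {
	void		*buf;		/* CPU address of the buffer */
	dma_addr_t	dma;		/* valid only if 'mapped' is set */
	size_t		len;
	bool		mapped;		/* did the caller provide 'dma'? */
};

static int start_xfer(struct device *dev, struct my_xfer *x)
{
	bool local_map = !x->mapped;

	/* Map the buffer ourselves only if the caller didn't. */
	if (local_map) {
		x->dma = dma_map_single(dev, x->buf, x->len, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, x->dma))
			return -ENOMEM;
	}

	/* ... the guts talk only in terms of x->dma ... */

	if (local_map)
		dma_unmap_single(dev, x->dma, x->len, DMA_TO_DEVICE);
	return 0;
}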
>> As far as I see it now, the whole thing looks wrong. The thing that we
>> suggest (i.e. abstract handles for memory allocation, set to kmalloc by
>> default) looks far better IMHO and doesn't require any flags whose use
>> increases uncertainty in the core.
> You are conflating memory allocation with DMA mapping. Those notions
> are quite distinct, except for dma_alloc_coherent() where one operation
> does both.
> The normal goal for drivers is to accept buffers allocated from anywhere
> that Documentation/DMA-mapping.txt describes as being DMA-safe ... and
> less often, message passing frameworks will do what USB does and accept
> DMA addresses rather than CPU addresses.
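
To spell that distinction out in code (dev and len assumed from the
surrounding context; a sketch, not a complete driver):

#include <linux/slab.h>
#include <linux/dma-mapping.h>

/* Allocation and mapping as two separate steps: */
void *buf = kmalloc(len, GFP_KERNEL);		/* allocate (DMA-safe) */
dma_addr_t dma = dma_map_single(dev, buf, len,	/* map for the device */
				DMA_TO_DEVICE);

/* ... versus dma_alloc_coherent(), where one call does both: */
dma_addr_t cdma;
void *cbuf = dma_alloc_coherent(dev, len, &cdma, GFP_KERNEL);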
As for our core implementation, it's totally agnostic about what kind of
addresses are used and how they should be handled; that's all left to
the controller driver to decide. I still think that's a far better
approach than lots of pointers and flags.
Vitaly