Re: [PATCH 2.6-git] MTD/SPI dataflash driver

On Thursday 01 December 2005 8:18 am, Vitaly Wool wrote:
 
> It took about fifteen minutes of mindwork to port the MTD dataflash driver
> David Brownell posted on 11/23 from his SPI core to ours,

Interesting and informative; +1!

Did you have a chance to test this?  There are some post-2.6.15-rc3-mm1
updates to this code, mostly to catch startup faults better but also to
hotplug reasonably.  The glitches I saw may as easily be JFFS2 integration
issues for the DataFlash code as bitbang adapter problems.  (I think you
started more or less from what's in rc3-mm1.)


> reducing both 
> the size of the source code and the footprint of the object code.
> I'd also like to mention that it now looks significantly easier to understand,
> since it no longer uses complicated SPI message structures. The number of
> variables used was also reduced.

I think that's all the same issue.  Other than "spi_driver" replacing
"device_driver" (I'd like to see a patch doing that to rc3-mm1), the
main changes seem to be:

 - Move from the original atomic requests like this

	{ write command, read response }

    over to two separate requests

	{ write command, and leave CS active }
	{ read response, and leave CS off }
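
    In spi_message terms, the atomic form is one message carrying two
    transfers; chipselect asserts once and covers both.  A minimal
    sketch of that shape (the function name, buffers, and lengths are
    illustrative, not the driver's actual code):

	#include <linux/spi/spi.h>
	#include <linux/string.h>

	static int atomic_command(struct spi_device *spi,
			const void *command, size_t cmd_len,
			void *response, size_t resp_len)
	{
		struct spi_transfer	x[2];
		struct spi_message	msg;

		spi_message_init(&msg);
		memset(x, 0, sizeof x);

		x[0].tx_buf = command;		/* opcode + address */
		x[0].len = cmd_len;
		spi_message_add_tail(&x[0], &msg);

		x[1].rx_buf = response;		/* CS still active here */
		x[1].len = resp_len;
		spi_message_add_tail(&x[1], &msg);

		/* CS covers both transfers and drops at end-of-message;
		 * a controller can run this as one DMA chain, one IRQ
		 */
		return spi_sync(spi, &msg);
	}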

 - Lower the per-request performance ceiling on this driver

      * the original code could be implemented in a single DMA chain on
        at least two systems I happen to have handy ... one IRQ.

      * this version requires two separate chains, with an intervening
        task schedule.  (More than twice the cost.)

      * in your framework I still can't be _sure_ it never does memcpy
        for those buffers (the last version I looked at did so).  The
        original code just used normal DMA models, so it "obviously"
        doesn't risk memcpy.
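
   That property is visible on the controller side: tx_buf/rx_buf must
   be DMA-safe memory, so the master driver can map them in place.  A
   hedged sketch (field names per the 2.6-era spi_transfer, which
   carries tx_dma/rx_dma; error handling elided):

	#include <linux/spi/spi.h>
	#include <linux/dma-mapping.h>

	static void map_transfer(struct device *dev, struct spi_transfer *t)
	{
		/* buffers arrive already DMA-safe (kmalloc'd, never
		 * on-stack), so they map directly: no bounce buffer,
		 * hence no memcpy
		 */
		if (t->tx_buf)
			t->tx_dma = dma_map_single(dev, (void *) t->tx_buf,
					t->len, DMA_TO_DEVICE);
		if (t->rx_buf)
			t->rx_dma = dma_map_single(dev, t->rx_buf,
					t->len, DMA_FROM_DEVICE);
	}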

 - Add fault handling problems ... probably an oversight: you call
   routines and don't check for fault reports.  Though I'm not clear
   how the faults between the two requests would be handled (in your
   framework) with any robustness... doing this right could easily
   cost all the codespace you've conserved.
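
   The awkward case is a fault after the first request succeeds: the
   device is left mid-command with CS still active.  A sketch of just
   the recovery skeleton, using hypothetical names (write_command,
   read_response, and force_cs_off stand in for your framework's calls):

	static int split_command(struct spi_device *spi,
			const void *cmd, size_t cmd_len,
			void *buf, size_t buf_len)
	{
		int	status;

		status = write_command(spi, cmd, cmd_len); /* CS stays on */
		if (status < 0)
			goto fail;

		status = read_response(spi, buf, buf_len); /* drops CS */
		if (status < 0)
			goto fail;
		return 0;

	fail:
		/* undo the half-finished command: force CS off, maybe
		 * resync the chip ... this is where the conserved
		 * codespace goes
		 */
		force_cs_off(spi);
		return status;
	}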

 - I might agree with this being "easier to understand" code, though
   it's debatable.  (The device sees one transaction; why should the
   driver writer have to split it up into two?)  But that doesn't
   matter much:  filesystems are better with "faster to run" code.

 - Chipselect works differently in your code.  You're giving one
   driver control over all the devices sharing a controller, by
   blocking requests going to other devices until your driver yields
   the chipselect.  This seems like a bad idea even if you ignore
   how it produces priority inversions in your framework.  :)
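
   For contrast: in the message-based model, chipselect scope never
   outlives one message, so the queue can interleave other devices'
   traffic between messages and no driver holds the whole bus.  A
   sketch (cmd/cmd_len are illustrative):

	struct spi_transfer	xfer = {
		.tx_buf	= cmd,
		.len	= cmd_len,
	};
	struct spi_message	msg;
	int			status;

	spi_message_init(&msg);
	spi_message_add_tail(&xfer, &msg);

	/* CS asserts when the message starts and drops when it ends;
	 * the queue may run other devices' messages before or after,
	 * but never mid-message, so nothing blocks indefinitely
	 */
	status = spi_sync(spi, &msg);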

So the way I see it, this is a good example of why my original I/O model
is better.  It provides _flexibility_ in the API so that some drivers
can be really smart, if they want to.  You haven't liked the consequence
of that, though:  controller drivers being able to choose how they
represent and manage I/O queues, rather than doing that in your "core".

- Dave


