Mark Lord wrote:
I don't believe the speed of the machine has much to do with it,
as IDE PIO is always at pretty much the same speed (or slower)
regardless of the CPU speed.
Best case is about 0.120 usec per 16-bit word, but that doesn't often
pan out in practice. More typical is something closer to 1 usec per
16-bit word. So, for multcount=16 (very common), best case is
16 * 256 * 0.120 = 491 usec, plus extra overhead for reading the IDE
status register (another usec or so), and other stuff. Figure maybe
500 usec total per interrupt for multcount=16 in the best case, or
4000 usec in the worst case.
At 115200bps, we get a byte every 86 usec or so. Assuming the UART FIFO
is set to interrupt (warn) us at 12/16 full, we have 4*86 = 344 usec to
respond and de-assert RTS. Less than that in practice.
Conclusion: using IDE multisector PIO is not a good idea while high-speed
serial transfers are in progress, since we cannot respond quickly enough.
It might be possible to set the receive FIFO interrupt threshold lower in
the UART (?).
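As a sketch of what that might look like on a 16550-style UART, using
the FCR trigger-level bits from <linux/serial_reg.h> (hypothetical; a
real driver would go through the serial core rather than poking the
port directly):

#include <asm/io.h>
#include <linux/serial_reg.h>

/* Drop the RX FIFO interrupt trigger from 14/12 bytes down to 4, so
 * the "getting full" interrupt fires earlier and leaves more time to
 * respond and de-assert RTS. */
static void lower_rx_trigger(unsigned long port)
{
	outb(UART_FCR_ENABLE_FIFO | UART_FCR_TRIGGER_4, port + UART_FCR);
}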
All that said, I doubt that his system is using IDE PIO in the first place.
Dunno how long IDE DMA interrupts take, but it's probably in the 20-50
usec range.
I think that PIO transfers only have to be done with interrupts disabled
on really old, evil controllers (without unmask set). I don't think
libata ever disables interrupts during transfers (?).
--
Robert Hancock Saskatoon, SK, Canada
To email, remove "nospam" from hancockr@nospamshaw.ca
Home Page: http://www.roberthancock.com/