RE: high resolution timer

On Fri, 2006-01-20 at 09:07 -0500, Gu, John A. (US SSA) wrote:
> The ADC will do nothing until a trigger command is written into its
> register. It then takes a sample, and within a usec it puts the
> digital data in another latched register, mapped into an IO address, for
> the program to read. This is called single operation; in other words, no
> trigger, no sample. It is capable of a maximum sample rate of 10K per
> second, but it all depends on how fast and accurately the program can
> drive it.
> 
> Assume this is the only thing the SBC is doing. Do you think the logic
> described below will work?
> 
> 1. The application (user space) initializes the ADC
> 2. The application issues a read() to the driver for data
> 3. The driver does not return from the read() until it finishes 1000
>    loops for 1000 samples
> 4. Within each loop, the driver issues a trigger command, waits for 1
>    usec, reads the result, puts it in a buffer and waits for another
>    99 (maybe) usec
> 5. On return from the read(), the application receives 1000 data
>    samples, then immediately issues another read(). So it issues 10
>    read() calls per second
> 
> Does this operation impact kernel operation, such as timeofday or
> other interrupt-driven events?

First of all, when you need to trigger your hardware 10000 times per
second to get a steady stream of samples, you have pretty poor hardware.

Then your example has some serious flaws. For example, what happens
between steps 4 and 5? Since your "userspace program" just used up
100 ms, it is highly likely it will have to make room for other userspace
programs, meaning it will take far more than 100 us before the next
read starts. But even if it did start the next read in time, it would
freeze your complete machine, since it has no time for anything but the
busy-wait loop.
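
To make that concrete, steps 3 and 4 inside the driver come down to
something like the sketch below. The port addresses are made up and
this is only an illustration, not a real driver, but it shows that the
whole ~100 ms per read() call is spent busy-waiting inside the kernel:

#include <linux/types.h>
#include <linux/delay.h>
#include <asm/io.h>

#define ADC_TRIGGER_PORT 0x300   /* assumption: trigger register */
#define ADC_DATA_PORT    0x302   /* assumption: latched result register */

/* called from the driver's read(); fills buf with 1000 samples */
static void adc_fill_buffer(u16 *buf)
{
	int i;

	for (i = 0; i < 1000; i++) {
		outb(1, ADC_TRIGGER_PORT);   /* issue the trigger command */
		udelay(1);                   /* busy-wait ~1 us for the conversion */
		buf[i] = inw(ADC_DATA_PORT); /* read the latched result */
		udelay(99);                  /* busy-wait to pace ~100 us per sample */
	}
	/* total: ~100 ms of busy-waiting with nothing else able to run */
}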

If you need a guaranteed stream of 10k samples/s you will not be able to
do it with the standard kernel. Your only options are to use one of the
realtime extensions, or to use hardware that can buffer the samples so
you only have to read, for example, a block of 1000 samples at a
time.
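
With buffering hardware the userspace side becomes almost trivial; a
minimal sketch, assuming a hypothetical /dev/adc0 device whose driver
just drains the on-board FIFO a block at a time:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint16_t block[1000];
	int fd = open("/dev/adc0", O_RDONLY);   /* hypothetical device node */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (;;) {
		/* at 10k samples/s this loop runs about 10 times per second */
		ssize_t n = read(fd, block, sizeof(block));
		if (n <= 0)
			break;
		/* process n / sizeof(block[0]) samples here */
	}
	close(fd);
	return 0;
}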

With RTAI you could, for example, create a periodic task that runs every
100 us (this is possible with a jitter of about 10-30 usec). In that
task you do only a trigger, a read, and a store into a FIFO. A userspace
application can then read from that FIFO. This will still put a high
load on your machine, but it has been done before. Trying to do
something like that with the standard kernel will _NOT_ work. And even
with RTAI it will be hard to get right, because you still have a jitter
of about 20 us on your sample clock, so the sample timing is not very
accurate (unless it is supported by hardware in some way).
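
Very roughly, such an RTAI task could look like the sketch below. The
ADC port addresses are assumptions again, and you should check the RTAI
documentation for the exact init sequence, but the structure is the
usual one: a periodic hard realtime task that pushes every sample into
an RT-FIFO, which userspace sees as /dev/rtf0:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <asm/io.h>
#include <rtai_sched.h>
#include <rtai_fifos.h>

#define ADC_TRIGGER_PORT 0x300    /* assumption */
#define ADC_DATA_PORT    0x302    /* assumption */
#define SAMPLE_FIFO      0        /* shows up as /dev/rtf0 */
#define PERIOD_NS        100000   /* 100 us */

static RT_TASK sample_task;

static void sample_loop(long unused)
{
	u16 sample;

	while (1) {
		outb(1, ADC_TRIGGER_PORT);      /* trigger a conversion */
		rt_busy_sleep(1000);            /* ~1 us conversion time */
		sample = inw(ADC_DATA_PORT);
		rtf_put(SAMPLE_FIFO, &sample, sizeof(sample));
		rt_task_wait_period();          /* sleep until the next 100 us slot */
	}
}

static int __init adc_sampler_init(void)
{
	RTIME period = nano2count(PERIOD_NS);

	rtf_create(SAMPLE_FIFO, 16384);     /* FIFO buffers ~8k samples */
	rt_set_oneshot_mode();
	start_rt_timer(period);
	rt_task_init(&sample_task, sample_loop, 0, 4096, 0, 0, NULL);
	rt_task_make_periodic(&sample_task, rt_get_time() + period, period);
	return 0;
}

static void __exit adc_sampler_exit(void)
{
	rt_task_delete(&sample_task);
	stop_rt_timer();
	rtf_destroy(SAMPLE_FIFO);
}

module_init(adc_sampler_init);
module_exit(adc_sampler_exit);
MODULE_LICENSE("GPL");

The userspace side then needs no realtime code at all: it just open()s
/dev/rtf0 and read()s blocks of samples from it, much like the block
read example above.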

You really should take this discussion to the RTAI or RTLinux mailing
list, since the people there actually build systems like the one you
want, unlike the people on this list, who "just" use Fedora.

- Erwin


