The ADC will do nothing until a trigger command is written into its
register. It then takes one sample, and within a microsecond it puts the
digital data into another latched register, mapped into the I/O address
space for the program to read. This is the single-conversion mode of
operation; in other words, no trigger, no sample. Although it is capable
of a maximum sample rate of 10K samples per second, the rate actually
achieved depends on how fast and how accurately the program can drive it.
Assume this is the only thing the SBC is doing.

Do you think the logic described below will work?

1. The application (user space) initializes the ADC.
2. The application issues a read() to the driver for data.
3. The driver does not return from the read() until it has finished 1000
   loops for 1000 samples.
4. Within each loop, the driver issues a trigger command, waits 1 usec,
   reads the result, puts it in a buffer, and waits another 99 (maybe)
   usec.
5. On return from the read(), the application receives 1000 data samples;
   it then issues another read() immediately. So it issues 10 read()
   calls per second.

(A rough C sketch of this driver loop is appended at the end of this
message.)

Does this operation impact kernel operation, such as timeofday or other
interrupt-driven events?

John

-----Original Message-----
From: fedora-list-bounces@xxxxxxxxxx [mailto:fedora-list-bounces@xxxxxxxxxx]
On Behalf Of Mike McCarty
Sent: Thursday, January 19, 2006 5:52 PM
To: For users of Fedora Core releases
Subject: Re: high resolution timer

Gu, John A. (US SSA) wrote:
> I have an ADC (Analog-to-Digital Converter, Maxim 197) integrated with
> an SBC (Single Board Computer) running the Linux kernel. The ADC has a
> sample rate of 10K per second (100 usec interval). The ADC can only
> operate in a single trigger-and-read fashion. Creating a driver is a
> good way to handle it. But I don't see any proper timer or delay logic
> available for my situation. Using udelay() would block all the other
> processes.
>
> As mentioned by you guys, getting an RT patch may be the only way out.

Ok. But how often must you read it? The best way may simply be to trigger
a read in a device driver, then just hang and wait for 100us or until the
sample is ready. If your time budget is not too stringent, this may work
ok. If you need to read the A/D say every 100ms, then eating 100us/100ms
is not too much load.

Are there any other devices which cannot accept a 100us interrupt latency?
IOW, will you blow some other time constraint, even though the CPU load
may not be too great using this technique?

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!

--
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
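
For reference, here is one minimal sketch of the read() handler described
in steps 3 and 4 above, assuming an ISA-style port-I/O hookup. The
register addresses (ADC_BASE, ADC_TRIGGER_REG, ADC_DATA_REG) and the
trigger value are placeholders, not the actual MAX197 register map, and
error handling and locking are omitted. Note that both udelay() calls
busy-wait inside the kernel, which is exactly the blocking concern raised
earlier in the thread.

/*
 * Sketch only: hypothetical register layout, not the MAX197 datasheet map.
 * The read() handler triggers, waits, and collects 1000 samples before
 * returning, as in steps 3-5 of the proposed logic.
 */
#include <linux/delay.h>     /* udelay() */
#include <linux/io.h>        /* inw(), outb() */
#include <linux/uaccess.h>   /* copy_to_user() */
#include <linux/fs.h>
#include <linux/types.h>

#define ADC_BASE        0x300              /* hypothetical I/O base */
#define ADC_TRIGGER_REG (ADC_BASE + 0)
#define ADC_DATA_REG    (ADC_BASE + 1)
#define NSAMPLES        1000

static u16 sample_buf[NSAMPLES];

static ssize_t adc_read(struct file *filp, char __user *buf,
                        size_t count, loff_t *ppos)
{
        int i;

        for (i = 0; i < NSAMPLES; i++) {
                outb(0x01, ADC_TRIGGER_REG);       /* step 4: issue trigger */
                udelay(1);                         /* ~1 usec conversion time */
                sample_buf[i] = inw(ADC_DATA_REG); /* read latched result */
                udelay(99);                        /* pad loop to ~100 usec */
        }

        if (count > sizeof(sample_buf))
                count = sizeof(sample_buf);
        if (copy_to_user(buf, sample_buf, count))
                return -EFAULT;

        return count;   /* step 5: application gets the 1000 samples */
}

With this shape, each read() spends roughly 100 ms busy-waiting in kernel
space, which is where the questions about timeofday and other
interrupt-driven events come in.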