Re: For hardware engineer types re: sound controllers and/or codec chips ??

Thanks Les;

On Sun, 2009-09-06 at 17:02 -0700, Les wrote:
> On Sat, 2009-09-05 at 13:36 -0400, William Case wrote:
> > Hi;
> > 

> > 
> > Is my assumption about temporary, perhaps 1-2 ticks, storage accurate?
> > 
> > Where is this data stored?
> > 
> > Does the sound card itself have some small capacity for memory?  
> > -- SRAM or DRAM?
> > 
> > 
> Hi, Bill,
> 	Sound, like most everything else electronic, is becoming more and more
> integrated.  Chip processes today are quite a bit different than they
> were even 20 months ago.  Anything I might say is probably different
> from the actual truth anyway.
> 
> 	But here is an attempt at a high level overview...
> 
As I have said before, it is not essential that I get the latest hardware
construction or software configuration.  I am not building a computer
from scratch in my garage.  But it is nice to know the various nodes or
resting points of a system so that I can follow the problems that are
being solved or the improvements being made.


> 	Sound is digitized by an A/D.  That is, it comes in, is buffered to
> isolate the source from the actions of the A/D, is selected or mixed
> with other sources (either before the A/D or afterwards, mathematically)
> and then encoded.  The encoding depends upon the standard(s) chosen.
> In the past the data was simply passed to the CPU and the encoding or
> decoding took place on the main processor.  Today many of the sound
> chips and advanced sound systems (USB, plug-in board or chipset) have
> encoder/decoder-capable processors built in, and many of them support
> programming to meet new standards, so that as MPEG2 becomes MPEG3 or 4,
> the software can be updated to manage that change.  This relieves the
> main processor of that task, so it can concentrate on other things.
> 
> 	At the same time, video boards are also becoming more powerful, with
> array processing built in to do all kinds of 3D graphics, and more
> powerful utilities are being developed all the time (facial recognition,
> tracking, movement detection, environment measurements, etc.).  Again,
> the on-board processing relieves the main processor of much of the
> processing burden.
> 
> 	However the data is handled, it is eventually sent to the main
> processor for storage or transmission, and that part may be handled by
> DMA, although using DMA for an audio process is somewhat overkill, since
> DMA is in essence a high-speed process.  It depends upon the demands put
> on the hardware whether slow-speed processes like audio are handled by
> DMA or polled buffering.  In any event, the mixing of multiple channels,
> the processing for Dolby or other encoding like THX, or even simulated
> projected sound as used by some speakers with phasing capabilities,
> means that the processing overhead is increasing, and the demand for
> integrated processing increases with it.  Also, board real estate is
> more expensive than IC real estate, so economies of scale dictate that
> the chips become more and more powerful.
> 
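The "mixed ... afterwards mathematically" part is the one piece I can
actually picture in code.  Just to check my own understanding, here is a
toy C sketch (my own illustration, not from any real driver) of mixing
two already-digitized 16-bit PCM buffers, with the sum clipped so that
an overflow does not wrap around:

#include <stdint.h>
#include <stddef.h>

/* Mix two signed 16-bit PCM buffers sample by sample.  The sum is
 * done in 32 bits and then clipped ("saturated") so an overflow does
 * not wrap around and turn into a loud click. */
static void mix_pcm16(const int16_t *a, const int16_t *b,
                      int16_t *out, size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++) {
        int32_t sum = (int32_t)a[i] + (int32_t)b[i];
        if (sum > INT16_MAX) sum = INT16_MAX;
        if (sum < INT16_MIN) sum = INT16_MIN;
        out[i] = (int16_t)sum;
    }
}

Mixing "before the A/D", as you say, would instead be an analog summing
stage, which obviously cannot be shown in C.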
In an earlier post, John Wendel suggested:
http://www.intel.com/design/chipsets/hdaudio.htm

It seems to have most of what I want.  I have perused the documents and
found material that seems to answer my posted questions, but I have not
yet really dug into them.  I will.

I am referring to the Intel® I/O Controller Hub 7 (ICH7)/Intel® High
Definition Audio/AC’97 Programmer’s Reference Manual (PRM), the Intel®
I/O Controller Hub 7 (ICH7) Family Datasheet, and one other.

Those documents mention several registers.  I assume those registers are
in the hub and that is where the sound data is temporarily stored,
whether it goes through DMA or not.  If I am wrong, just let me know.  A
long explanation is not needed at this point, at least not until I
actually dig into the manuals and try to understand them myself.
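My mental model of "a register in the hub" is just a memory-mapped
location that the driver reads through a volatile pointer, something
like the toy fragment below.  (The offset name and value here are
completely made up for illustration; the real offsets are the ones
listed in the ICH7 documents.)

#include <stdint.h>

/* HYPOTHETICAL_POSITION_OFFSET is a made-up name, not a real ICH7
 * register.  A real driver would get the base address from the
 * device's PCI BAR and map it before reading anything. */
#define HYPOTHETICAL_POSITION_OFFSET 0x84

static uint32_t read_dma_position(volatile uint8_t *mmio_base)
{
    /* 'volatile' keeps the compiler from caching the value: the
     * hardware updates this register as its DMA engine walks through
     * the sample buffer, so every read must go to the device, not to
     * a stale copy. */
    return *(volatile uint32_t *)(mmio_base + HYPOTHETICAL_POSITION_OFFSET);
}

Whether the actual sample data sits in such registers or only passes
through a small FIFO on its way to main memory is exactly the kind of
detail I still need to dig out of the datasheet.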

> 	The enemy of good sound is noise, and PCs, with their switching power
> supplies, digital I/O, and something called ground bounce (where the
> return path gets periodically overloaded with current, minimizing the
> difference between the power rail and ground), all contribute noise.  So
> lots of systems now use USB or some other means to isolate the audio
> system from the processing ground.
> 
> 	And then a new problem crops up.  If the audio system does the
> processing, how can it handle the processing noise it makes itself?
> There are lots of methods, but one is to make the processing synchronous
> with the sampling.  This means that the noise occurs at the same time as
> the ADC or DAC is switching to the next sample, so the noise is not
> captured.  Other techniques here all have to do with making the noise
> "common mode", so that the audio system is balanced against ground and
> differential.  When the ground noise goes into such an amplifier, it is
> canceled by its own image of the opposite polarity.  (This is a
> simplified explanation.)
> 

I am always fascinated by the ingenious solutions engineers develop.  It
just demonstrates how much of computing, in spite of all the innovation,
is built on past endeavours.
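To convince myself about the "common mode" trick, I worked it through as
a toy calculation: a differential input amplifies the difference between
its two legs, so noise that rides equally on both legs drops out.  A
little C illustration with made-up numbers:

#include <stdio.h>

int main(void)
{
    double signal = 0.500;  /* wanted audio signal, volts */
    double noise  = 0.120;  /* ground-bounce noise, volts */

    /* Balanced (differential) wiring: the signal appears with
     * opposite polarity on the two legs, while the noise appears
     * identically ("common mode") on both. */
    double leg_plus  = +signal / 2.0 + noise;
    double leg_minus = -signal / 2.0 + noise;

    /* An ideal differential amplifier outputs the difference, so the
     * common-mode noise cancels and only the signal remains. */
    double out = leg_plus - leg_minus;

    printf("recovered signal = %.3f V (noise term cancelled)\n", out);
    return 0;
}

It prints 0.500, with the 0.120 of noise nowhere to be seen, which is
the whole point of the balanced input.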

> 	So, logically, the main processor will set up a process on the system
> that sends and receives encoded digital streams and tells the audio
> circuitry what is coming, how it is coded, how large the blocks are,
> and so forth for sound output; the block size and transfer method
> depend upon the currently expected standard and the chipset in use.
> The audio circuitry will then process the digital input into sound
> output for speakers, etc.
> 

I never forget that computing, at its most basic level, is no more than
opening and closing a complex set of switches in order to get electrical
power to flow through one system of hardware or another.

> 	The corresponding operation for audio in will set up the processor on
> the audio board for the signals to process, the method to use, and the
> size of the data blocks to transfer.  The audio input from the
> microphones or other analog inputs is then sampled into digital data,
> appropriately encoded, and passed back to the computer, where a waiting
> process will dispose of the data in the appropriate manner for the
> application.
> 

Those two paragraphs are the crux of it.  If I can get those essential
elements working (I mean working in my mind, through logical analysis
and visualization), adding all the sidebars, caveats and exceptions just
becomes grunt work.
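For my own notes, here is roughly what that "set up the stream, tell the
hardware the format and block size, then move blocks" handshake looks
like from user space on Linux, using ALSA as a stand-in.  This is only a
minimal playback sketch with example parameters; everything the chipset
and codec negotiate underneath is hidden behind the driver.

/* build with: gcc -o toyplay toyplay.c -lasound */
#include <alsa/asoundlib.h>
#include <string.h>

int main(void)
{
    snd_pcm_t *pcm;
    short block[2 * 1024];          /* one block: 1024 stereo frames */
    int i;

    /* Open the default playback device and describe the stream:
     * sample format, access mode, channels, rate and a target
     * latency.  This is the "tell the audio circuitry what is
     * coming" step. */
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 100000) < 0) /* 2 ch, 44.1 kHz, ~100 ms */
        return 1;

    /* The application then just hands over blocks of samples; the
     * driver and hardware handle the buffering, any DMA, and the
     * actual D/A conversion.  (Silence here, for brevity.) */
    memset(block, 0, sizeof(block));
    for (i = 0; i < 100; i++)
        snd_pcm_writei(pcm, block, 1024);

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}

The capture direction you describe is the mirror image: open the device
with SND_PCM_STREAM_CAPTURE and pull blocks back with snd_pcm_readi(),
then hand them to whatever process is waiting for them.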

> I hope that helps.
> 
It does indeed.

> Regards,
> Les H
> 
-- 
Regards Bill
Fedora 11, Gnome 2.26.3
Evo.2.26.3, Emacs 23.1.1

