Re: Advice to an audiophobe ??

On Fri, 2008-12-26 at 01:04 +1100, David Timms wrote:
> William Case wrote:
> > On Wed, 2008-12-24 at 08:30 +1030, Tim wrote:
> >> On Tue, 2008-12-23 at 10:55 -0500, William Case wrote:
> ...
> > 1. Gives me a whole range of adjustments for different channels.  (I
> > assume channels means different sources  e.g. Master, Headphone, PCM etc.).
> Just to clarify, a source device generates an audio signal (line in, PCM 
> (pulse code modulation, i.e. uncompressed wav audio)), which might then be 
> processed (e.g. volume control, master, headphone), and then sent to a 
> destination (often an output device like a speaker connector or 
> headphone jack).
> 
> Because a typical soundcard has an internal hardware mixer, it can 
> usually mix together various inputs (sources) like CD input, mic input 
> and recorded audio signals, and produce a single output signal (usually 
> stereo, i.e. 2 channels). When mixing externally received signals 
> together, no main CPU processing power is used, unless you are trying 
> to record to hard disk etc.
> 
> > 2. gives me two choices and 
> > 3., 4., 5. gives me only Master.  
> > Which should I choose and why?
> capture means recording - usually from a line in (e.g. from a VCR or 
> MP3 player etc.), or from a microphone. So exclude those from your choice.
> 
> > If I should be using HDA NVidia (Alsa-mixer), why do I have PulseAudio
> > options?
> Consider pulseaudio to be a real-time digital mixer and volume control, 
> where the audio calculations are performed inside your main CPU. In the 
> default setup, once pulseaudio has done its processing, it passes the 
> result to the alsa driver, which outputs the audio data to the soundcard. 
> The soundcard turns the digital audio data into analog audio signals for 
> use with an amplifier, speakers, or headphones.
> 
> Pulseaudio also has enhanced capabilities, like remembering that when 
> you play back with xmms you like to output via your amplifier and 
> speakers, but when you are viewing a flash video, to play back into 
> your headphones instead, at a different level. Another capability lets 
> you choose the destination playback device while the material is 
> actually being played. A third capability lets the output go to an 
> audio device on another machine. Obviously, this is a bit trickier to 
> set up.
> 
> ...
> >> These individual mixer input controls should normally be left off if you
> >> never use them, as they can each introduce noise (hiss, beeps and
> >> burbles, etc.) to the system.  
> > I will turn them off except for Master and Front.  I will experiment
> > with PC Speaker.  Of course these are only available to me if I use the
> > default alsa mixer setting.
> It's not an either/or setting; both parts will still be involved: 
> pulseaudio will process, mix, and attenuate sound signals, whereas alsa 
> will drive the physical hardware. The setting you are seeing lets you 
> decide whether to control the physical driver volume levels or the 
> software-generated pulseaudio volume controls. If you mute or turn the 
> alsa master way down, it won't matter how high you turn the pulseaudio 
> mixer, since the alsa mixer comes after the pulseaudio one in the audio 
> chain. (The reverse is also true.)
> 
> If you play back a loud audio file with both the pulseaudio source 
> and master turned up full, then change to the alsa setting, you can 
> use the alsa setting to set an absolute maximum level that you would 
> want to hear, by adjusting the master. Then you can go back to the 
> pulseaudio setting to adjust the playback to a comfortable level, and 
> from then on only use the pulseaudio setting.
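> 
> To see why the order matters, think of each volume control as a 
> multiplier on the signal. A minimal sketch (the numbers are made-up 
> example values, not real mixer readings):
> 
>     # Each stage scales the signal; the stages multiply together,
>     # so either one at zero silences the whole chain.
>     def effective_level(pulse_volume, alsa_master):
>         """Volumes as fractions 0.0-1.0; returns the overall gain."""
>         return pulse_volume * alsa_master
> 
>     print(effective_level(1.0, 0.5))   # alsa caps the maximum: 0.5
>     print(effective_level(0.6, 0.5))   # day-to-day pulseaudio use: 0.3
>     print(effective_level(1.0, 0.0))   # alsa muted: pulse can't help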
> 
> ...
> >>>       * How is sound related to video ?
> > 
> >> Sound is the sound, video is the picture...  The question is too vague
> >> to be answerable.
> In digital form, sound and vision are both represented with 1s and 0s. 
> All combined video-and-audio file types pack the audio and video 
> information together into the one file. The multiplexed file provides 
> information about when to play back each frame of video in relation to 
> the audio in the file. For example, an mpeg2 (dvd) stream might carry 
> two frames of video, then two of audio, then one of video, then two of 
> audio, in an order chosen to achieve a consistent throughput of audio 
> and video data.
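> 
> A toy model of that interleaving (purely illustrative; real containers 
> are far more involved):
> 
>     import heapq
> 
>     # (timestamp-in-seconds, kind) pairs standing in for packets.
>     video = [(i / 25.0, "video") for i in range(5)]   # 25 frames/sec
>     audio = [(i / 40.0, "audio") for i in range(8)]   # smaller, more
>                                                       # frequent packets
> 
>     # The muxer emits whichever packet is due next, so the player
>     # never starves for either stream.
>     for timestamp, kind in heapq.merge(video, audio):
>         print("%.3fs %s" % (timestamp, kind))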
> 
> >>>       * Why are there so many files associated with producing sound?
> In digital audio, the most basic file type is waveform (.wav), where 
> each momentary value of the audio is stored, on a 1-for-1 basis. 
> Experiment and calculation show that to store a quality musical 
> recording we need to sample that momentary value at 44kHz (44,000 
> times per second) or higher - a little more than twice the roughly 
> 20kHz upper limit of human hearing - so as not to disrupt our digital 
> recording with audio aliases. Since we also seem to enjoy the spatial 
> enhancement produced by stereo or more channels, the file needs to 
> store both left and right information. Finally, we found that if we 
> store the digital value using only a small number of bits per sample, 
> on playback we hear a harsh, chunky sound, rather than the CD-like 
> quality of 16 (or more) bits per sample. The catch with all that is 
> that it takes up a lot of space.
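> 
> You can see the cost directly with a small sketch using only Python's 
> standard library (writing one second of CD-quality silence):
> 
>     import wave
> 
>     rate, channels, width = 44100, 2, 2   # 16-bit stereo = CD quality
> 
>     with wave.open("silence.wav", "wb") as f:
>         f.setnchannels(channels)
>         f.setsampwidth(width)             # bytes per sample
>         f.setframerate(rate)
>         f.writeframes(b"\x00" * rate * channels * width)
> 
>     # 44100 * 2 * 2 = 176,400 bytes every second - roughly 10 MB
>     # per minute, before any compression at all.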
> 
> To solve the space issue (less of a problem now that storage costs a 
> lot less), compression schemes were developed. Most take advantage of 
> reducing the number of channels (e.g. to mono), reducing the sampling 
> rate, or reducing the number of quantizing levels (bits/sample); but 
> this is done in the context of the type of audio being compressed - 
> e.g. human voices are typically of lower frequency, and can be sampled 
> at a slower rate, and with fewer levels.
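> 
> Crude bit-depth reduction, for instance, looks like this (a sketch, 
> assuming numpy; real codecs are much smarter about where they cut):
> 
>     import numpy as np
> 
>     rate = 8000                            # telephone-style rate for voice
>     t = np.arange(rate) / rate
>     signal = np.sin(2 * np.pi * 300 * t)   # a 300 Hz voice-range tone
> 
>     # Quantize to 4 bits: about 16 distinct levels instead of the
>     # 65,536 of 16-bit audio - audibly harsh and "chunky".
>     scale = 2 ** 4 / 2 - 1
>     quantized = np.round(signal * scale) / scale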
> 
> The biggest jump in compression came with psychoacoustic modelling, 
> where it was found that in a complex sound a listener does not notice 
> that certain frequency (pitch) components have become inaudible, 
> masked by other, louder sounds - so those components need not be 
> stored at all.
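> 
> A very crude caricature of that idea (a sketch, assuming numpy; real 
> psychoacoustic models use the ear's measured masking curves, not a 
> flat cut-off):
> 
>     import numpy as np
> 
>     rate = 44100
>     t = np.arange(rate) / rate
>     # A loud tone plus a quiet neighbour it would tend to mask.
>     signal = (np.sin(2 * np.pi * 1000 * t)
>               + 0.01 * np.sin(2 * np.pi * 1100 * t))
> 
>     spectrum = np.fft.rfft(signal)
>     # Discard components more than 30 dB below the loudest one;
>     # an encoder would simply not spend bits on them.
>     threshold = np.abs(spectrum).max() / 10 ** (30 / 20)
>     spectrum[np.abs(spectrum) < threshold] = 0
>     reduced = np.fft.irfft(spectrum, n=len(signal))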
> 
> The reason there are so many formats is that developers were 
> essentially competing to produce more highly compressed audio files, 
> without noticeable change in quality, for a certain type of audio, 
> over a certain communication medium. E.g., when the fastest home 
> internet connections were slow modems, compression made it possible to 
> transmit voice signals over your internet connection. If you tried to 
> transmit music of higher quality than voice, you would get large audio 
> distortions that made it difficult to hear the original material.
> 
> You might like to play with the audio editor program audacity (perhaps 
> from rpmfusion if you want to be able to import and save in certain 
> compressed formats such as mp3). It shows you a graphical 
> representation of the audio file, and, for example, lets you choose a 
> zoom level and start and stop positions, and play back just small 
> parts of a file, so that you can work out what the sound "looks" like 
> to a computer.
> 
> Hope that helps a bit more ;-)
> DaveT.
> 
Hi, Dave,
	I work in the IC test industry, and that is the clearest non-math
explanation I have ever read.  I also teach applied DSP (Fourier
analysis, time series analysis, and uses of IFTs).  I have endeavored
to explain these effects to many and varied audiences, but never came
up with such clarity.

	May I quote you?

Thank you,
Les Howell

