Page 5 of 5
Results 41 to 42 of 42

Thread: Why does it sound different on the Radio...

  1. #41
    GAStronomist Simon Barden's Avatar
    Join Date
    Oct 2016
    Location
    Reading, UK
    Posts
    10,547
    Quote Originally Posted by Marcel View Post
    In my mind I see it this way. On a 440 Hz note at a 44,100 samples-per-second rate there will be close to 100 individual samples available to rebuild a single cycle of the original "sound" at the loudspeaker, but on the 88th note of a piano at 4186.1 Hz there will only be about 10 individual samples to rebuild all the complexities of that specific piano note. Ten samples is still quite a lot and will be fairly representative, but it will not be 100% accurate. Thankfully, though, we don't have dust and static to contend with.
    As Doc Nomis said, the Nyquist Theorem states that if the sampling rate is more than double the maximum frequency you are trying to sample, then you can reproduce that sound perfectly. It doesn't matter if the frequency is 20 Hz or 20,000Hz, if you are sampling at 44.1kHz, then both frequencies get reproduced accurately. Each sample represents a point on a curve (not the height of a step) and the D/A converter recreates the curve as a continuous voltage wave.

    There's a great video on this web page that explains almost all of the misconceptions about digital audio. https://xiph.org/video/vid2.shtml
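The samples-per-cycle arithmetic and the "point on a curve, not a step" claim can both be checked numerically. The sketch below is my own illustration, not from either post: it counts samples per cycle for the two notes mentioned, then rebuilds a 4186.1 Hz tone between sample instants using Whittaker-Shannon sinc interpolation (the idealised version of what a D/A reconstruction filter does). Despite only ~10 samples per cycle, the reconstructed value matches the original sine closely.

```python
import math

FS = 44_100.0  # CD sample rate (Hz)

def samples_per_cycle(freq_hz):
    """How many samples land within one cycle of a tone."""
    return FS / freq_hz

# Marcel's arithmetic: ~100 samples per cycle for A440, ~10.5 for 4186.1 Hz.

def sinc_reconstruct(samples, t):
    """Whittaker-Shannon interpolation: rebuild the band-limited signal at
    continuous time t (seconds) from its discrete samples. This is the
    idealised reconstruction a D/A converter approximates."""
    total = 0.0
    for n, s in enumerate(samples):
        x = FS * t - n
        total += s * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))
    return total

# Sample 10 ms of a 4186.1 Hz sine, then evaluate the reconstruction
# *between* sample instants, mid-buffer to keep truncation effects small.
f = 4186.1
N = 441  # 10 ms at 44.1 kHz
samples = [math.sin(2 * math.pi * f * n / FS) for n in range(N)]
t = 0.005 + 0.3 / FS  # deliberately off the sample grid
err = abs(sinc_reconstruct(samples, t) - math.sin(2 * math.pi * f * t))
# err is tiny: with only ~10 samples per cycle, the band-limited
# reconstruction still recovers the in-between value of the sine.
```

The residual error here comes only from truncating the (infinite) sinc sum to a 10 ms buffer, which is the software analogue of a real converter's finite-length reconstruction filter.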

  2. #42
    Mentor Marcel's Avatar
    Join Date
    Apr 2017
    Location
    Bouldercombe Qld.
    Posts
    1,168
    Quote Originally Posted by Simon Barden View Post
    As Doc Nomis said, the Nyquist Theorem states that if the sampling rate is more than double the maximum frequency you are trying to sample, then you can reproduce that sound perfectly. It doesn't matter if the frequency is 20 Hz or 20,000Hz, if you are sampling at 44.1kHz, then both frequencies get reproduced accurately. Each sample represents a point on a curve (not the height of a step) and the D/A converter recreates the curve as a continuous voltage wave.

    There's a great video on this web page that explains almost all of the misconceptions about digital audio. https://xiph.org/video/vid2.shtml
    Yeah.... But !...

    I agree 100% with the theory: for any given series of sampled points there is only one mathematically correct output waveform that any reasonable D-to-A converter should deliver. However, in the real world we have to deal with hardware that does have its limitations.

    When testing with pure tones the limitations are almost non-existent. The predicted next sample point and the slope of the voltage change are very close to ideal, so a smooth transition from one sample to the next is easily calculable, and noise outside the frequencies of interest is just as easily minimised. Even slightly more complex two-tone or three-tone tests yield similarly high-quality results.

    A 'foot to the floor', 100%-difficulty test that has not often been demonstrated is to pass wide-band white or pink noise through the A-to-D-to-A process and compare it to an identical, delayed copy of the source signal on a scope with the inputs in differential mode. The trace on the screen then displays every difference between the original signal and the ADA signal, and those differences appear as noise on the line. The less noise, the better the ADA process. Sadly, there will always be some noise.
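The same source-minus-round-trip residual can be demonstrated in software. A minimal sketch, assuming (my assumption, not stated in the post) that 16-bit quantisation is the dominant imperfection of the ADA chain: quantise a full-scale noise signal, subtract it from the original (the software equivalent of the differential scope), and measure the residual as an SNR.

```python
import math
import random

random.seed(1)

BITS = 16
Q = 2.0 / (1 << BITS)  # quantisation step for a +/-1.0 full-scale signal

def quantise(x):
    """Round a sample to the nearest 16-bit level - a stand-in for the
    whole A-to-D-to-A chain, reduced to its quantisation error alone."""
    return round(x / Q) * Q

# 'Differential scope' in software: source minus the round-tripped copy.
src = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
residual = [s - quantise(s) for s in src]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

snr_db = 20.0 * math.log10(rms(src) / rms(residual))
# For 16-bit quantisation of a full-scale uniform signal this lands
# near the textbook figure of roughly 96 dB.
```

A real ADA chain adds filter ripple, jitter and analogue noise on top of this floor, which is why the scope trace is never perfectly flat.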

    That noise represents the differences between the original signal and what the ADA process delivers. How it affects our human perception of an ADA signal, compared with a pure original analogue signal, is the matter open for debate...

    Whether it's digital at 99.999% accuracy or analogue at 100%, there are 'golden ears' who claim they can hear the difference. However, I suspect most people hear like me, so I doubt most could actually tell them apart.

    A case in point: an Israeli communications company wanted to cram as many phone users onto their E1 (2 Mb/s) trunks as possible, so they ran tests where they reduced the bit rate of their phone circuits to find the minimum rate at which communication and intelligibility were maintained for just over 97% of the spoken word. Huge teams of people chatted for hours over low-bit-rate circuits and rated them. Apparently a 16 kb/s data rate gives that 97%, which means that on a 64 kb/s circuit (formerly the international standard for one phone line) they could cram four phone lines, or carry three phone lines plus spare signalling capacity on a 56 kb/s circuit feeding a T1 trunk. Their E1 trunks jumped from 32 callers to a fully served 128 clients, and for the most part the clients wouldn't know the difference. I've listened extensively to these 16 kb/s circuits, and for a phone line with a 300 Hz to 3 kHz range the intelligibility is still pretty darn good. However, you'd be gob-smacked to know who else now consistently uses this bit-rate trick for other purposes in today's world and gets away with it, mostly just to save money.
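The capacity arithmetic in that story checks out as simple division. A quick sketch (the constants are the standard E1/PCM rates; the post's own figures, not mine, are the 16 kb/s codec and the 32-to-128 jump):

```python
# Back-of-envelope check of the E1 numbers above: an E1 carries
# 32 x 64 kb/s timeslots (2,048 kb/s total; in practice slot 0 is
# reserved for framing, but the post counts all 32, so we do too).
E1_KBPS = 2_048
G711_KBPS = 64       # classic one-call-per-timeslot PCM rate
LOW_RATE_KBPS = 16   # the reduced-rate codec from the trial

calls_per_slot = G711_KBPS // LOW_RATE_KBPS   # 4 calls where 1 fitted
e1_calls = E1_KBPS // LOW_RATE_KBPS           # 128 calls on one E1
```

So quartering the per-call bit rate quadruples the trunk's capacity, exactly the 32-to-128 jump described.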

    For me, 44.1k is awesome and works just fine: an essentially flat 20 Hz to 20 kHz response, better than 90 dB S/N, and nil pops or crackles. Awesome! Getting rid of all those coupling caps and DC-coupling from D/A to speaker is the only way it can get better...

