Vienna Symphonic Library Forum

  • Might someone clarify the tech specs of 16-bit audio relative to commercial CDs? Am I correct that a standard audio CD plays at 16-bit, 44.1 kHz? What happens when you burn a 24-bit AIFF to a commercial Red Book CD? Does it dither automatically to 16 bits? And what happens when an older CD player from the '90s is given a 24-bit CD (if such a thing is even possible)?

  • 24-bit allows much more precise work on the sound, reverb, and effects.
    Even if you don't notice it, it's good to have it… now.

    [:)]

  • Thanks for your replies:

    I understand that 24 bit is better in terms of specs. I just wanted to know if people could really hear the difference.

    Plowman, as far as I know, your specs are correct for Red Book standard CDs. I believe many people prefer to work the way Laurent described, using the highest bit depth and sample rate possible right up until the final mix before mastering. My understanding is that this way you get the best overall quality before your mix. I've read that in orchestral music this is even more critical than in other genres in order to get the most realistic orchestral sound at the end of the whole process. I don't have enough experience to know this other than through reading, though.

    Also, I believe there are some non-commercial applications which might use these higher specs.

    Be well,

    Poppa

  • Plowman, my understanding of the Red Book standard is that it only allows 16-bit/44.1 kHz - that would mean you can't even burn a 24-bit CD according to the Red Book CDDA standard. And if you could force it, I'm almost sure a CD player would refuse to play it. Higher resolutions are reserved for SACD/DVD-A.
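
    For what it's worth, the practical consequence in rough Python terms, using the soundfile package (the filenames are just placeholders, and this assumes the material is already at 44.1 kHz so only the bit depth changes):

    ```python
    import soundfile as sf

    src = "mix_24bit_44k1.aif"                   # placeholder filename
    info = sf.info(src)
    assert info.samplerate == 44100, "resample first; Red Book is 44.1 kHz only"

    # Decode to floats regardless of the source bit depth...
    data, sr = sf.read(src, dtype="float64")
    # ...and write 16-bit PCM, which is all a Red Book CD can hold.
    sf.write("mix_16bit_44k1.wav", data, sr, subtype="PCM_16")
    ```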

  • It depends on your ears and your experience whether you can actually hear the difference between a 16-bit and a 24-bit sample... but that's not the point; as soon as your sample hits the digital mixing engine in your chosen DAW application, those extra bits give you all sorts of advantages.

    Any application that burns CDs (Toast, etc.) will simply truncate or dither a 24-bit file to 16-bit automatically. (These days I'm favouring simple truncation over the various dithering algorithms.)

    Vince
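
    To make the truncate-vs-dither point concrete, here is a rough NumPy sketch - the function name and the TPDF dither are illustrative assumptions, not what Toast or any other burning app actually does:

    ```python
    import numpy as np

    def reduce_to_16bit(x, dither=True):
        """Reduce float samples in -1.0..1.0 (e.g. a decoded 24-bit file) to 16-bit ints."""
        scaled = x * 32767.0
        if dither:
            # TPDF dither: sum two uniform noise sources of +/-0.5 LSB each, then round.
            scaled = scaled + np.random.uniform(-0.5, 0.5, scaled.shape)
            scaled = scaled + np.random.uniform(-0.5, 0.5, scaled.shape)
            out = np.round(scaled)
        else:
            out = np.trunc(scaled)  # plain truncation: the extra bits are simply dropped
        return np.clip(out, -32768, 32767).astype(np.int16)

    # Very quiet material is where the two differ: truncation turns it into
    # correlated distortion, dither turns it into a low, steady noise floor.
    t = np.arange(44100) / 44100.0
    quiet_tone = 1e-4 * np.sin(2 * np.pi * 1000 * t)   # roughly -80 dBFS
    trunc = reduce_to_16bit(quiet_tone, dither=False)
    dith = reduce_to_16bit(quiet_tone, dither=True)
    ```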

  • @vinney57 said:

    It depends on your ears and your experience whether you can actually hear the difference between a 16-bit and a 24-bit sample... but that's not the point; as soon as your sample hits the digital mixing engine in your chosen DAW application, those extra bits give you all sorts of advantages.

    Any application that burns CDs (Toast, etc.) will simply truncate or dither a 24-bit file to 16-bit automatically. (These days I'm favouring simple truncation over the various dithering algorithms.)

    Vince

    Hi Vince... could you please elaborate on what kind of advantages the extra bits give?

  • Christian - I could be all wet on this, but perhaps once the files get into the host, the added bit depth gives a bit (pun intended) more headroom before distortion.


    Rob

  • Keeping it 24-bit from recording all the way to the final mix provides headroom, keeping the noise floor a long way from your vital upper bits (so to speak). In the 16-bit days it was always a struggle in recording and mixing to keep the levels up there whilst providing dynamic range and absolutely NOT going over 0 dBFS. With 24-bit you can relax, give yourself 8-12 dB of headroom, and not worry about noise and quantisation errors down in the dark lower bits becoming audible. The mix engines in DAWs and digital mixers are 32-bit float, 64-bit, or 80-bit, and so are capable of handling large dynamics and mixing them together.

    I have a feeling that once you are mixing 60-odd tracks of orchestra, the benefits of 24-bit files will become obvious.
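
    The headroom argument in rough numbers, using the usual ~6 dB-per-bit rule of thumb (the 12 dB headroom figure below is just an example value):

    ```python
    def dynamic_range_db(bits):
        # Theoretical SNR of an ideal quantizer for a full-scale sine: 6.02 * N + 1.76 dB
        return 6.02 * bits + 1.76

    headroom_db = 12.0   # example tracking headroom left below 0 dBFS
    for bits in (16, 24):
        total = dynamic_range_db(bits)
        left = total - headroom_db
        print(f"{bits}-bit: ~{total:.0f} dB total, ~{left:.0f} dB above the "
              f"quantization floor after giving away {headroom_db:.0f} dB of headroom")
    # 16-bit: ~98 dB total, ~86 dB left;  24-bit: ~146 dB total, ~134 dB left
    ```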

  • Probably the best/easiest way to hear the difference is to take something at 8-bit/32 kHz and convert it to 16-bit/48 kHz. Sounds awful.

    My opinion is that it's always better to convert something down than up, or to work at the final specs from the start. But I'm very happy with the extra bits!
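
    A quick NumPy check of the "convert down, not up" point, looking only at the bit-depth half of the example (the sample-rate half would need a resampler): re-quantizing 8-bit material at 16-bit recovers nothing, because the error from the 8-bit stage is carried along unchanged.

    ```python
    import numpy as np

    t = np.arange(32000) / 32000.0
    clean = 0.5 * np.sin(2 * np.pi * 440 * t)          # the original signal

    as_8bit = np.round(clean * 127.0) / 127.0          # quantized to 8-bit depth
    as_16bit = np.round(as_8bit * 32767.0) / 32767.0   # then "upconverted" to 16-bit

    print(np.max(np.abs(clean - as_8bit)))    # error introduced by the 8-bit stage
    print(np.max(np.abs(clean - as_16bit)))   # essentially the same: nothing was gained
    ```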

  • I did a comparison recording where an acoustic guitar was miked in stereo and the signal fed to a 16-bit ADAT and a 20-bit ADAT. Upon playback, the stereo image actually moved when comparing one to the other.

  • @weslldeckers said:

    Probably the best/easiest way to hear the difference is to take something at 8-bit/32 kHz and convert it to 16-bit/48 kHz. Sounds awful.


    Um.... I don't see how that's relevant. Of course it sounds awful.