Okay Dietz, good point.
@dpcon said:
01. Set the track fader
Set all track faders in the sample-playback software to -6.0 dB.
Does this mean, for example, that if I have an aux in my DAW bringing in audio from an outboard computer, I set it to -6.0 dB? Or do I set the output faders on the outboard computer hosting Plogue Bidule to -6.0? Could you explain this step more and where it applies?
Let's say I have a real orchestra playing a tutti staccato chord, and it peaks at -0.01 dB.
If I play the same tutti with the volume setup you proposed, it should sound the same, if I understood you right.
I. The total dynamic range
03. Absolute maximum loudness - staccato
Play a tutti chord in which all instruments play a staccato ff sample at velocity 127. This produces the maximum possible loudness. The master fader will not clip, but may rise to -0.01 dB peak.
But to my ears some of the VSL patches are much too loud compared to others, which means I have to lower their levels. If I do so, the peak will no longer sit at -0.01 dB.
2. But no matter what headroom I end up with, the question remains why I don't need to balance my instruments first. If I'm right, the volumes of the individual patches are optimized on their own, not balanced in relation to each other. That means I have to turn down the volume of, e.g., the solo violin, because otherwise I'd get the unrealistic situation that a single violin plays as loud as the other 16.
But if I lower the volume of single instruments, don't I also lower the overall volume, so that after some balancing my headroom gets bigger (e.g. -20 dB instead of -12 dB)?
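That intuition can be checked with a little arithmetic. A minimal Python sketch, using made-up track levels and the worst case where peaks add linearly: attenuating one overly loud track lowers the summed peak, so the gap to 0 dBFS (the headroom) grows.

```python
import math

def db_to_lin(db: float) -> float:
    """Convert a dBFS level to a linear amplitude."""
    return 10 ** (db / 20)

def lin_to_db(x: float) -> float:
    """Convert a linear amplitude back to dB."""
    return 20 * math.log10(x)

# Hypothetical peak levels (dBFS) of three tracks before balancing.
tracks = [-6.0, -6.0, -6.0]
peak_before = lin_to_db(sum(db_to_lin(t) for t in tracks))

# Lower one overly loud patch (e.g. the solo violin) by 8 dB.
tracks[2] -= 8.0
peak_after = lin_to_db(sum(db_to_lin(t) for t in tracks))

print(f"worst-case summed peak before: {peak_before:+.1f} dBFS")
print(f"worst-case summed peak after:  {peak_after:+.1f} dBFS")
# The summed peak drops, so the headroom indeed gets bigger.
```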
@mathis said:
In my template I was more concerned with the relative ff levels: trumpets twice as loud as horns, horns twice as loud as strings and woodwinds. But I didn't set these levels by numbers, I set them by ear.
On the other hand, since all instruments can play more or less equally soft, the MIDI programming doesn't translate automatically between instruments. I'm thinking about applying input filters to the individual instruments so programming can be moved around freely without adaptation.
Angelo, you don't mention relative levels. How did you set these up?
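The "input filter" idea above could be as simple as a per-instrument velocity curve. The function name and parameters below are hypothetical, purely to illustrate the concept in Python:

```python
def remap_velocity(vel: int, scale: float = 1.0, offset: int = 0) -> int:
    """Hypothetical per-instrument input filter: rescale an incoming
    MIDI velocity so the same programming works across instruments,
    clamped to the valid note-on range 1..127."""
    return max(1, min(127, round(vel * scale) + offset))

# The same ff phrase, softened for a quieter instrument:
print(remap_velocity(127, scale=0.8))  # 102
```

With one such curve per instrument, a phrase programmed for, say, the trumpets could be copied to the horns and land at a musically sensible velocity range without re-editing every note.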
Okay experts, back to the topic of dynamics in digital recording.
Digital signal processing needs a lot of bits, but the signal itself has a limited dynamic range. Given that each bit is worth about 6 dB of dynamics, a true 24 bits would mean 144 dB of dynamic range. But there is no such thing as 144 dB of dynamic range in a 24-bit recording. The best analog circuits cannot be that quiet, not to mention A/D and D/A conversion.
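The 6 dB-per-bit rule comes from the fact that each extra bit doubles the number of representable levels, and 20·log10(2) ≈ 6.02 dB. A quick check in Python:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an n-bit signal: 20 * log10(2**n)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(24), 1))  # ~144.5 dB on paper
print(round(dynamic_range_db(20), 1))  # ~120.4 dB, closer to real converters
```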
A true 20 unweighted bits is fabulous, and 21 bits is state of the art. The bottleneck for noise is the microphone and the input stage of the microphone preamp. The lowest few bits of 24-bit digital audio just bounce between 0s and 1s at random.
The available dynamic range at various gain settings is:
122 dB dynamic range at 21 dB mic-pre gain
111 dB dynamic range at 40 dB mic-pre gain
91 dB dynamic range at 60 dB mic-pre gain
The above are state-of-the-art numbers. The important point to note:
You have a 20+ bit noise floor at a mic-pre gain of 21 dB
You have an 18+ bit noise floor at a mic-pre gain of 40 dB
You have a 15+ bit noise floor at a mic-pre gain of 60 dB
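Those bit figures follow directly from dividing each dynamic-range number by ~6.02 dB per bit, as a short Python check shows:

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ~6.02 dB per bit

def effective_bits(dynamic_range_db: float) -> float:
    """Convert a measured dynamic range into an equivalent count of clean bits."""
    return dynamic_range_db / DB_PER_BIT

for gain, dr in [(21, 122), (40, 111), (60, 91)]:
    print(f"{gain} dB gain: {dr} dB range = {effective_bits(dr):.1f} clean bits")
```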
... and all that before we even start talking fingers on a string.
So, what do you think is the available dynamic range when we compose and produce with the VSL library?
@audiocure said:
First, turning the line volume up by 3 dB will double the signal's power, but the perceived loudness will not double, because loudness perception is logarithmic rather than linear. It follows that taking the line down 3 dB halves the power, but does not sound half as loud.
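To keep the dB arithmetic straight: +3 dB doubles power, +6 dB doubles amplitude, and a perceived doubling of loudness is usually put at around +10 dB. A quick check in Python:

```python
def power_ratio(db: float) -> float:
    """Power ratio corresponding to a dB change (10 dB per decade)."""
    return 10 ** (db / 10)

def amplitude_ratio(db: float) -> float:
    """Amplitude (voltage) ratio corresponding to a dB change (20 dB per decade)."""
    return 10 ** (db / 20)

print(f"+3 dB: power x{power_ratio(3):.2f}, amplitude x{amplitude_ratio(3):.2f}")
print(f"+6 dB: power x{power_ratio(6):.2f}, amplitude x{amplitude_ratio(6):.2f}")
```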