Okay!
@mathis said:
In my template I was more concerned about the relative ff levels: trumpets twice as loud as horns, and horns twice as loud as strings and woodwinds. But I didn't set these levels by numbers, I set them by ear.
On the other hand, since all instruments can play more or less equally soft, the MIDI programming doesn't translate automatically between the instruments. I'm thinking about applying input filters to the individual instruments, so that instrument programming can be moved around freely without adaptation.
Angelo, you don't mention relative levels. How did you set these up?
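The relative ff scheme described above could be sketched numerically. This is a hypothetical illustration, assuming the common rule of thumb that a +10 dB change is perceived as roughly twice as loud; the 10 dB step and the group names are my assumptions, not values from the thread, which sets levels by ear.

```python
# Hypothetical fader offsets for @mathis's "double loudness" chain,
# assuming +10 dB is perceived as roughly twice as loud (a common
# rule of thumb, not a figure stated in the thread).
DOUBLE_LOUDNESS_DB = 10.0  # assumed perceptual doubling step

# trumpets twice as loud as horns; horns twice as loud as strings/woodwinds
ff_offsets_db = {
    "trumpets": 0.0,
    "horns": -DOUBLE_LOUDNESS_DB,
    "strings": -2 * DOUBLE_LOUDNESS_DB,
    "woodwinds": -2 * DOUBLE_LOUDNESS_DB,
}

for group, offset in ff_offsets_db.items():
    print(f"{group:10s} {offset:+6.1f} dB relative to trumpets")
```

In practice these offsets would only be a starting point, to be adjusted by ear exactly as the quoted post describes.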
Okay experts, back to the topic of dynamics in digital recording.
Digital signal processing needs a lot of bits, but the signal itself has a limited dynamic range. Given that each bit adds about 6 dB of dynamics, a true 24 bits would mean 144 dB of dynamic range. But there is no such thing as 144 dB of dynamic range in a 24-bit recording: even the best analog circuits cannot be that quiet, and that is before we even consider AD and DA conversion.
A true 20-bit unweighted dynamic range is fabulous, and 21 bits is state of the art. The bottleneck for noise is the microphone and the input stage of the microphone preamp. The lowest few bits of a 24-bit digital audio signal are just bouncing between 0s and 1s at random.
The available dynamic range at various gain settings is:
122 dB dynamic range at 21 dB preamp gain
111 dB dynamic range at 40 dB preamp gain
91 dB dynamic range at 60 dB preamp gain
The above numbers are state of the art. The important point is what they imply:
You have 20+ bits above the noise floor at a preamp gain of 21 dB
You have 18+ bits above the noise floor at a preamp gain of 40 dB
You have 15+ bits above the noise floor at a preamp gain of 60 dB
... and all that before we even start talking about fingers on a string.
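The "bits" figures follow directly from the 6 dB-per-bit rule mentioned earlier (more precisely, 20·log10(2) ≈ 6.02 dB per bit). A minimal sketch checking the preamp numbers quoted above:

```python
import math

def effective_bits(dynamic_range_db: float) -> float:
    """Convert a dynamic range in dB to effective bits.

    Each bit contributes 20 * log10(2) ~ 6.02 dB of dynamic range.
    """
    return dynamic_range_db / (20 * math.log10(2))

# The state-of-the-art preamp figures quoted above:
for gain_db, dr_db in [(21, 122), (40, 111), (60, 91)]:
    print(f"{gain_db} dB gain: {dr_db} dB range = {effective_bits(dr_db):.1f} bits")
```

Running this gives roughly 20.3, 18.4, and 15.1 bits, matching the "20+", "18+", and "15+" figures in the list.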
So, what do you think is the available dynamic range when we compose and produce with the VSL library?
@audiocure said:
First, turning the line volume up by 3 dB will double the signal's power, but the perceived loudness will not go up that much. This is because loudness perception is logarithmic, not linear. It follows that taking the line down 3 dB will halve the signal's power, but will not sound half as loud.
@vinco said:
- Where and when do you do the panning?
If the panning is not done before calibration, then we would get a wrong setting, wouldn't we?
When I'm talking about panning, I'm talking about emulating the stereo image of the orchestra, not the creative panning that would be done at the mixing stage.
I do not apply panorama to the individual instruments while composing. The virtual faders also all sit at the same position while composing; this gives me accurate information about the loudness of the streamed samples.
The Excel sheet the mixer gets from me includes panorama information, e.g.: vln I pan 10:00, vln II pan 11:00, vla pan 12:30, vlc 14:00, cb 16:00, horns 1-8 pan 13:00-15:00. This info only gives the mixer an approximate stereo field, so the registers spread nicely across it; the exact panorama, as well as the 3-D space, is left to the mixer.
Making a mix solely in the computer, with plugin processing, virtual reverb busses, etc., is another setup and workflow altogether.
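Those clock-face positions could be translated into normalized pan values for a DAW. This is only a sketch: it assumes a linear pan pot sweeping from 8:00 (hard left) to 16:00 (hard right) with 12:00 as center, which is my assumed convention, not something stated in the thread.

```python
def clock_to_pan(hour: float, minute: float = 0.0,
                 hard_left: float = 8.0, hard_right: float = 16.0) -> float:
    """Map a clock-face pan position to a normalized pan in [-1.0, +1.0].

    12:00 is center. The 8:00/16:00 sweep limits are an assumed
    convention, not taken from the thread.
    """
    pos = hour + minute / 60.0
    center = (hard_left + hard_right) / 2.0
    half_range = (hard_right - hard_left) / 2.0
    pan = (pos - center) / half_range
    return max(-1.0, min(1.0, pan))  # clamp to the pot's travel

# The string positions from the spreadsheet above:
for name, h, m in [("vln I", 10, 0), ("vln II", 11, 0), ("vla", 12, 30),
                   ("vlc", 14, 0), ("cb", 16, 0)]:
    print(f"{name:7s} pan {clock_to_pan(h, m):+.2f}")
```

Under these assumptions vln I lands at -0.50, vla slightly right of center at +0.13, and cb at hard right; a mixer would of course fine-tune these by ear.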
Also, you said in one of your previous posts that you do the automation moves on the MIDI parts.
@vinco said:
How do you get the right balance of the instruments and automation if your instruments are not panned while automating in MIDI?
But I'd love to use your info to have a nice calibrated template session.
Something to start with that I can rely on.
The biggest difference when mixing on a console is that the best outboard hardware processors are used instead of plugins.