Vienna Symphonic Library Forum

  • Felix...

    We can change the headroom within a few seconds, simply by pulling all the faders to a desired value, and without losing your reference gain stages and calibrated monitoring.

    I often choose a headroom of -12.0 dB for pop music, or 16 dB for broadcast sound. And when an additional, later-added instrument clips the master, I just pull all faders back by 0.1 dB or whatever value is necessary; this keeps the stereo master from clipping again throughout the whole track.

    You can change the headroom at any time during production, in both directions: enlarge the headroom in case it is used up, or reduce it in case it is too big.

    .

  • VII. Wide Range Dynamic Test File

    This is a recording of an orchestral work with a dynamic range of nearly 50 dB. The composition is called "Sunrise" and can be downloaded; see the link in the next post.

    The exact dynamic range of “Sunrise” is:

    -49.26 dB RMS at the start of the audio file
    -43.71 dB RMS at the ppp timpani roll
    -56.32 dB RMS minimum while music is present
    -1.14 dB peak at the maximum loudness of the finale

    This composition is ideal for demonstrating how wide the dynamics of an orchestral recording can be, though it is not the widest dynamic range ever recorded to CD.

    The first half of the five minutes stays around -44.0 dB RMS to -34.0 dB RMS. Note: a sample patch from VSL can reproduce 29.5 dB RMS of dynamics. The second half continually increases in loudness up to the final climax.

    The overall dynamics from the beginning to the end of this work have some similarities to Ravel's "Bolero": soft at the beginning and getting continuously louder towards the finale.

    .

  • "Sunrise"

    Sunrise_master_5mn25s_44.1k.wav

    Download Link:
    http://vsl.co.at/upload/users/57/Sunrise_master_5mn25s_44.1k.wav

    .

  • "Sunrise_master_5mn25s_44.1k.wav"

    Please give the file a critical listen before any blatant, bombastic blathering.

    .

  • .... and please go to the other reasonable thread criticizing this psychotic one before your mind has been controlled by the People of the Holy Template.

  • If anyone is interested in making VSL sound like a virtual orchestra as ordained by the priesthood of the Knights Template, please renounce all other musicianship and artistic paganism as you are baptized into the promise of eternal creativity.

    Sorry, couldn't resist [;)]

    Clark

  • Ah, yes, the impetuosity of youth. I remember my University students as well. I wouldn't trade those days for anything, though.

    I'm glad you don't mind me playing the devil's advocate; aside from my own personal viewpoint born of my own experience and the experience of many other professionals, this keeps us all from being indifferent! Healthier than the alternative.

    So "I'll say Tomato..."


    Clark

  • @mathis said:


    In my template I was more concerned about the relative ff levels: trumpets double the loudness of horns, horns double the loudness of strings and woodwinds. But I didn't set these levels by numbers, I set them by ear.
    On the other hand, since all instruments can play more or less equally softly, the MIDI programming doesn't translate automatically between the instruments. I'm thinking about applying input filters to the individual instruments, so instrument programming can be moved around freely without adaptation.
    Angelo, you don't mention relative levels. How did you set these up?


    Yes, this is the big question for me, too. I'm mostly concerned with relative levels - basically, in the sense of making 4 Horns fff distinctly mask out 1 solo violin fff...

    Great work, though, Angelo! Thanks for sharing.

    J.


    I haven't read this entire post yet, so somebody might have already addressed this, but I hope this will help:

    First, turning the line volume up by 3 dB will double the signal's level; however, the perceived loudness will not go up that much. This is because volume is logarithmic, not linear. It would follow, then, that taking the line down 3 dB will halve the signal, but it will not sound half as loud. So, to double the perceived loudness of an instrument, you're going to have to increase the line volume by 10 phons (10 dB @ 1000 Hz). Another way of thinking about it: it would take 10 violins to double the loudness of just one.

    This is why there is a need to create a standard template and gain-stage everything correctly from the beginning: you are going to need a good amount of headroom to give the perception of a realistic dynamic range.

  • (... moved over from the "Critics"-Thread for better logical coherency of the two threads - /Dietz)

    @Angelo Clematide said:

    Okay experts, back to the topic of dynamics in digital recording.

    Digital signal processing needs a lot of bits, but the signal itself has a limited dynamic range. Given that each bit is about 6 dB of dynamics, a true 24 bits would mean 144 dB of dynamic range. But there is no such thing as a 144 dB dynamic range in a 24-bit recording. The best analog circuits cannot be that quiet, to say nothing of AD and DA conversion.

    A true 20-bit unweighted noise floor is fabulous, and 21-bit is state of the art. The bottleneck for noise is the microphone and the input stage of the microphone preamp. The lower few bits of 24-bit digital audio just bounce up and down between 0s and 1s in a random fashion.

    The available dynamic range for various gain settings is:

    122 dB dynamic range at 21 dB mic-pre gain
    111 dB dynamic range at 40 dB mic-pre gain
    91 dB dynamic range at 60 dB mic-pre gain


    The above numbers are state of the art. What they say is important to note:

    You have a 20+ bit noise floor at a mic-pre gain of 21 dB
    You have an 18+ bit noise floor at a mic-pre gain of 40 dB
    You have a 15+ bit noise floor at a mic-pre gain of 60 dB


    ... and all that before we even start talking fingers on a string.

    .

    So, what do you think is the available dynamic range when we compose and produce with the VSL library?

    .

    /Dietz - Vienna Symphonic Library
  • @audiocure said:


    First, turning the line volume up by 3 dB will double the signal's level; however, the perceived loudness will not go up that much. This is because volume is logarithmic, not linear. It would follow, then, that taking the line down 3 dB will halve the signal, but it will not sound half as loud.


    Just to quickly correct that: 3 dB doubles the acoustic power, but 6 dB doubles the electrical level of a signal. So replace your 3 dB with 6 dB.
    Logarithmic volume... yes, that is true, and the unit decibel already covers that. But the fact that we don't perceive a doubled electrical signal level as doubled loudness is yet another beast, and not simply explained by linear vs. logarithmic.

  • Hi Angelo,

    First I want to thank you for posting this information.
    It is so important and yet it is really hard to find good information about it.

    I do have 2 questions:
    - Where and when do you do the panning?
    If the panning is not done before calibration, then we would get a wrong setting, wouldn't we?

    - Same for reverbs. If the reverb settings aren't done before calibration, we would get another wrong setting (especially the front/back correlation).

    When I'm talking about reverberation, I mean a reverb to simulate the acoustic space of the orchestra, not one we would add on a final mix.

    When I'm talking about panning, I'm talking about emulating a stereo image of the orchestra, not creative panning that would be done at the mixing stage.

    Any thoughts?

    Thanks again for your time,
    Vincent

  • Vincent...

    The changes of energy in the stereo field introduced with panorama, and the energy a reverberator adds, are not part of the calibration. The calibration is done by feeding a -20 dB RMS pink-noise signal to the loudspeakers, and then adjusting the loudness with the volume knob to 85 dB SPL at the mixer's ear position.

    The proposed chords a composer can play are only an additional idea for getting in touch with the dynamics of orchestral material, especially when produced with a sample library.

    @vinco said:


    - Where and when do you do the panning?
    If the panning is not done before calibration, then we would get a wrong setting, wouldn't we?

    When I'm talking about panning, I'm talking about emulating a stereo image of the orchestra, not creative panning that would be done at the mixing stage.


    I do not apply panorama to the individual instruments while composing. The virtual faders are also all at the same position while composing; this gives me accurate information about the loudness of the streamed samples. The musical dynamics are made with MIDI velocity.

    When preparing the track for the mix, I bounce all instruments to separate tracks, dry and without applying panorama.

    .

  • Angelo,

    Thanks again for your comments.

    There is still something I don't understand.

    @Angelo Clematide said:



    I do not apply panorama to the individual instruments while composing. The virtual faders are also all at the same position while composing; this gives me accurate information about the loudness of the streamed samples.

    The Excel sheet the mixer gets from me includes panorama information, e.g.: vln I pan 10:00, vln II pan 11:00, vla pan 12:30, vlc 14:00, cb 16:00, horns 1-8 pan 13:00-15:00. This info only gives the mixer an approximate stereo field, so the registers spread nicely across it, but the exact panorama, as well as the 3-D space, is left to the mixer.

    Making a mix solely in the computer, with plug-in processing, virtual reverb busses, etc., is another setup and workflow altogether.


    In the first paragraph you're saying that you do not apply pan settings while composing. But then you said that you send some pan information to your mixer.

    Also, you said in one of your previous posts that you do the automation moves on the MIDI parts.

    - How do you get the right balance of the instruments and automation if your instruments are not panned while automating in MIDI?

    I might be wrong, but as far as I know you're losing some signal when you're panning (or gaining some by putting a signal in the center spot).

    How is it possible to get the right balance?
    Or maybe I just didn't understand what you meant, sorry [:(]

    I do mix in the box, but in 2 steps, kind of the same way you described:
    - I first do my composition and automate the MIDI parts.
    - I then bounce all my parts to audio and do the audio automation/mix.

    But I'd love to use your info to have a nicely calibrated template session.
    Something to start with that I can rely on.

    Also you said that mixing in the box would require another setup/workflow.

    What is it that would be different?
    Could you give me an example?

    My current workflow is as follows:
    - one station for the MIDI, running the VSL stuff and other instruments (fully automated in MIDI)
    - one Pro Tools station receiving the output of the MIDI rig via ADAT Lightpipe, used for mixing/editing the audio.


    I cannot thank you enough for your time!

    Vincent

  • Let's talk about working in the box only.

    @Another User said:


    Also, you said in one of your previous posts that you do the automation moves on the MIDI parts.


    I don't use MIDI automation in any of my software or hardware. The MIDI data in use is velocity, maybe a controller for vibrato, pitch bend, and fader automation for crescendos and decrescendos...

    .

  • @vinco said:


    How do you get the right balance of the instruments and automation if your instruments are not panned while automating in MIDI?


    The most common ways to spread phantom sources in the stereo field are listed below (a short sketch of b) and c) follows the list):

    a) Each stereo track sits on two faders, one for the left channel and one for the right. Each of these two single-channel faders has its own pan knob. This gives you all the options you would have when mixing on a console.

    b) Interaural intensity differences (IID), better called interaural level differences (ILD), are differences in the intensity of the sound arriving at the two ears, and are important cues for localizing sounds.

    c) Interaural time differences (ITD) are the main cues for localizing the azimuthal position of sounds. For example: even when the left channel is far louder than the right channel, if the left channel is delayed by a few hundred microseconds relative to the right channel, we localize the sound as coming from the right, because the sound from the right channel arrives earlier at our ears.

    .

  • @Another User said:


    But I'd love to use your info to have a nicely calibrated template session.
    Something to start with that I can rely on.


    I rather think using a template doesn't save much time; I have never had the same instrumentation twice.

    .

  • Angelo,

    I'm not a native English speaker, so sometimes the words I use aren't right.

    When I'm automating MIDI tracks, I use a combination of velocity, volume (CC7), modulation (CC1), and the fader automation on the aux return of each instrument.

    So, if I understand correctly, while composing all your instruments are panned full stereo (100% left - 100% right)?

    When I said balance, I meant the loudness of the different instruments.
    I always believed (please correct me if I'm wrong) that an instrument centered in the pan would sound louder than the same instrument panned hard left (for the same gain on the fader).

    This is why I'm asking this question.

    If your instruments are not panned appropriately when you're composing, then all your fader moves/automation would need recalibration when you get to the point of panning the instruments (either panning in MIDI or in Pro Tools on the aux return).

    @Angelo Clematide said:


    The biggest difference when mixing with a console is that the best hardware outboard processors are used instead of plug-ins.


    Of course, but I thought you meant the calibration setup/workflow would be different if it's done with an external console.

    Vincent

  • Vincent,

    two questions:

    1) How many faders do you have per stereo track in your virtual mixer: one stereo fader, or two single-channel faders?

    2) Do you know what "pan laws" are?

    Here is a good explanation of what pan law is:
    http://logicquicktips.blogspot.com/2006/10/laws-of-pan.html

    .