Vienna Symphonic Library Forum
Forum Statistics

182,941 users have contributed to 42,266 threads and 254,953 posts.

In the past 24 hours, we have 5 new thread(s), 9 new post(s) and 47 new user(s).

  • Hi Angelo,

    First I want to thank you for posting this information.
    It is so important and yet it is really hard to find good information about it.

    I do have 2 questions:
    - Where and when do you do the panning?
    If the panning is not done before calibration, then we would get a wrong setting, wouldn't we?

    - Same for reverbs. If reverb settings aren't done before calibration, we would get another wrong setting (especially front/back correlation).

    When I'm talking about reverberation, it is a reverb to simulate the acoustic space of the orchestra, not the one we would add on a final mix.

    When I'm talking about panning, I'm talking about emulating a stereo image of the orchestra and not creative panning that would be done at the mixing stage.

    Any thoughts?

    Thanks again for your time,
    Vincent

  • last edited
    Vincent...

    The changes in energy in the stereo field introduced by panning, and the energy a reverberator adds, are not part of the calibration. The calibration is done by feeding a -20 dB RMS pink noise signal to the loudspeakers and then adjusting the loudness with the volume knob to 85 dB SPL at the mixer's ear position.
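To make the -20 dB RMS reference figure concrete, here is a small Python sketch (helper names are mine; plain white noise stands in for the pink-noise test signal, which would need an extra filtering step) that scales a noise block to a target RMS level:

```python
import math
import random

def rms_dbfs(samples):
    """RMS level of a sample block, in dB relative to full scale (1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def scale_to_rms(samples, target_dbfs):
    """Scale a block so its RMS level hits the target (e.g. -20 dBFS)."""
    gain = 10 ** ((target_dbfs - rms_dbfs(samples)) / 20)
    return [s * gain for s in samples]

# White noise stands in for the pink-noise calibration signal here.
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(48000)]
calibrated = scale_to_rms(noise, -20.0)
print(round(rms_dbfs(calibrated), 1))  # -20.0
```

Playing such a signal and turning the monitor volume until the meter reads 85 dB SPL at the listening position is the acoustic half of the calibration; the digital half is only making sure the signal really sits at -20 dB RMS.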

    The proposed chords a composer can play are only an additional idea for getting in touch with the dynamics of orchestral material, especially when it is produced with a sample library.

    @vinco said:


    - Where and when do you do the panning?
    if the panning is not done before calibration, then we would get a wrong setting, wouldn't we?

    When I'm talking about panning, I'm talking about emulating a stereo image of the orchestra and not creative panning that would be done at the mixing stage.


    I do not apply panorama to the individual instruments while composing. The virtual faders are also all at the same positions while composing; this gives me accurate information about the loudness of the streamed samples. The musical dynamics are made with MIDI velocity.

    When preparing the track for the mix, I bounce all instruments to separate tracks, dry and without applying panorama.

    .

  • last edited
    Angelo,

    Thanks again for your comments.

    There is still something I don't understand.

    @Angelo Clematide said:



    I do not apply panorama to the individual instruments while composing. The virtual faders are also all at the same positions while composing; this gives me accurate information about the loudness of the streamed samples.

    The Excel sheet the mixer gets from me includes panorama information, i.e.: vln I pan 10:00, vln II pan 11:00, vla pan 12:30, vlc 14:00, cb 16:00, horns 1-8 pan 13:00-15:00. This info only gives the mixer an approximate stereo field, so the registers spread nicely across it, but it is left to the mixer to set the exact panorama, as well as the 3-D space.

    Making a mix solely in the computer, with plugin processing, virtual reverb busses etc., is another setup and workflow altogether.


    In the first paragraph, you're saying that you do not apply pan settings while composing. But then you said that you'd send some pan information to your mixer.

    Also, you said in one of your previous posts that you do the automation moves on the MIDI parts.

    - how do you get the right balance of the instruments and automation if your instruments are not panned while automating in MIDI?

    I might be wrong, but as far as I know you lose some signal when you pan (or gain some by putting a signal in the center spot).

    How is it possible to get the right balance?
    Or maybe I just didn't understand what you meant, sorry [:(]

    I do mix in the box but in 2 steps, kind of the same way you described.
    - I first do my composition and automate the midi parts.
    - I then bounce all my parts in audio and do the audio automation/mix.

    But I'd love to use your info to have a nice calibrated template session.
    Something to start with that I can rely on.

    Also you said that mixing in the box would require another setup/workflow.

    What is it that would be different?
    Could you give me an example?

    My current workflow is as follows:
    - one station for MIDI, running the VSL stuff and other instruments (fully automated in MIDI)
    - one Pro Tools station getting the MIDI station's audio over ADAT Lightpipe, used for mixing/editing the audio.


    I cannot thank you enough for your time!

    Vincent

  • last edited
    Let's talk about working in the box only.

    @Another User said:


    Also, you said in one of your previous post that you do the automation moves on the Midi parts.


    I don't use MIDI automation in any of my software or hardware. The MIDI data in use is velocity, maybe a controller for vibrato, pitchbend, and fader automation for crescendos and decrescendos...

    .

  • last edited

    @vinco said:


    how do you get the right balance of the instruments and automation if your instruments are not panned while automating in MIDI?


    The most common ways to spread phantom sources in the stereo field are:

    a) Each stereo track is on two faders, one for the left channel and one for the right channel. Each of these two single-channel faders has its own pan knob. This gives all the options you would have when mixing on a console.

    b) Interaural intensity differences (IID), better called interaural level differences (ILD), are differences in the intensity of sound arriving at the two ears, and are important cues for localizing sounds.

    c) Interaural time differences (ITD) are the main cues for localizing the azimuthal position of sounds. For example: even when the left channel is far louder than the right channel, if the left channel is delayed by a few hundred microseconds against the right channel, then we localize the sound as coming from the right, because the sound from the right channel arrives those few microseconds earlier at our ear.
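As a small illustration of point c), this Python sketch (helper names are illustrative, not from any DAW API) converts an interaural time difference in microseconds into a whole-sample delay on one channel:

```python
SAMPLE_RATE = 48000  # Hz, a common session sample rate

def itd_samples(delay_us, sample_rate=SAMPLE_RATE):
    """Convert an interaural time difference in microseconds to whole samples."""
    return round(delay_us * sample_rate / 1_000_000)

def apply_itd(left, right, delay_us):
    """Pad the left channel with silence so the right channel leads;
    by the precedence effect the source is then heard from the right."""
    n = itd_samples(delay_us)
    return [0.0] * n + left, right + [0.0] * n

# Roughly 600 us is near the maximum natural interaural delay,
# which at 48 kHz works out to about 29 samples.
delayed_left, leading_right = apply_itd([0.5, 0.5], [0.5, 0.5], 600)
```

Note that both channels can keep identical levels; the localization cue comes purely from which channel leads in time.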

    .

  • last edited

    @Another User said:


    But I'd love to use your info to have a nice calibrated template session.
    Something to start with that I can rely on.


    I rather think using a template doesn't save much time; I have never had the same instrumentation twice.

    .

  • last edited
    Angelo,

    I'm not an English-native speaker so sometimes the words I'm using aren't right.

    When I'm automating MIDI tracks, I use a combination of velocity, volume (CC7), modulation (CC1) and fader automation on the aux return of each instrument.

    So, if I understand correctly, while composing, all your instruments are fully panned across the total stereo field (100% left - 100% right)?

    When I said balance, I meant the loudness of the different instruments.
    I always believed (please correct me if I'm wrong) that an instrument centered in the pan sounds louder than the same instrument panned hard left (for the same gain on the fader).

    This is why I'm asking this question.

    If your instruments are not panned appropriately when you're composing, then all your fader moves/automation would need recalibration once you get to the point of panning the instruments (either panning in MIDI or in Pro Tools on the aux return).

    @Angelo Clematide said:


    The biggest difference when mixing with a console is that the best hardware outboard processors are used instead of plugins.


    Of course, but I thought you meant the calibration setup/workflow would be different if it's done with an external console.

    Vincent

  • Vincent,

    two questions:

    1) how many faders do you have per stereo track in your virtual mixer, one stereo fader or two single-channel faders?

    2) do you know what "pan laws" are?

    Here is a good explanation of what pan law is:
    http://logicquicktips.blogspot.com/2006/10/laws-of-pan.html
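The linked article covers the details, but the idea behind Vincent's loudness question can be sketched in a few lines of Python (an equal-power, -3 dB pan law; the function names are mine):

```python
import math

def equal_power_pan(position):
    """-3 dB pan law: map a pan position in [-1, 1] onto cosine/sine gains,
    so the acoustic power of a mono source stays roughly constant."""
    angle = (position + 1.0) * math.pi / 4  # -1..+1 -> 0..pi/2
    return math.cos(angle), math.sin(angle)

def to_db(gain):
    return 20 * math.log10(gain)

left, right = equal_power_pan(0.0)    # centered
print(round(to_db(left), 1))          # -3.0: each channel attenuated at center
_, hard_right = equal_power_pan(1.0)  # hard right
print(round(to_db(hard_right), 1))    # 0.0: no attenuation at the extreme
```

Without this center attenuation, a source panned to the middle would indeed sum louder than the same source panned hard to one side, which is exactly the imbalance Vincent describes.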

    .

    Until now I have always used stereo faders, but it is possible to use two mono faders if needed (Logic Pro).

    I'm actually considering using the stereo aux in Pro Tools as the reference for panning (instead of using the output of the virtual instrument on each track).

    Why?

    Vincent

  • last edited

    @vinco said:


    Until now I always used stereo faders...

    Why?



    Deactivate the Universal mode in Logic; then you are free to choose whatever fader type you like. The number of faders and pan knobs doubles, and you are in a professional, console-like environment.

    Faders on a console are always single-channel faders; there is no such thing as a stereo fader in pro audio.

    Think about the consequences of having two faders for each stereo track, where each of these two faders also has a pan knob. Not only can you pan each side of the stereo track separately, you can also change the level of each side independently and override the pan laws; this worked even before pan laws were introduced in Logic. Further, you can narrow the stereo field, and several other techniques can now be used to place a source in the stereo field.
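One of the techniques mentioned here, narrowing the stereo field, can be sketched as a mid/side blend, which is equivalent to cross-panning the two single-channel faders inward (a hypothetical helper, not a Logic or Pro Tools feature):

```python
def narrow_stereo(left, right, width):
    """Narrow a stereo image: width=1.0 leaves it untouched,
    width=0.0 collapses it to mono."""
    out_left, out_right = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2           # what both channels share
        side = (l - r) / 2 * width  # what separates them, scaled down
        out_left.append(mid + side)
        out_right.append(mid - side)
    return out_left, out_right

# Fully narrowed, a hard-left/hard-right pair meets in the middle.
mono_l, mono_r = narrow_stereo([1.0, 0.0], [0.0, 1.0], 0.0)
```

On a console you would get the same result by turning the left channel's pan knob partly toward the right and vice versa, exactly what two single-channel faders with their own pan knobs allow.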

    .

  • That's a great tip you just gave.

    I only experimented with the stereo panning PT has but I never thought of using 2 separate faders to get more flexibility.

    So that's something you would do at the mixing stage, right? Not when you're composing?

    Vincent

  • last edited

    @Another User said:

    Not when you're composing?

    When I compose, I need all time and concentration for getting the idea into the box as fast as possible.

    .

  • last edited

    @Angelo Clematide said:

    That's a great tip you just gave.
    That's not a tip, that's how we have all worked all along.


    Thanks for the training then, since I didn't have that knowledge [:)]

    Thanks again for being so patient.

    Once you understand it, it feels obvious though!

    Vincent

    Why are you quoting my post? Your last notions don't relate to what I've written whatsoever... [*-)]

  • Hi Angelo,

    there are three more questions I have:

    1. When you describe

    "III. The measured loudness of an orchestra recording

    The decibel values are from a recording that has dynamics from ppp (as soft as possible) to fff (as loud as possible):

    ff = -12.5 dB RMS (as loud as possible)
    mf = -18.0 dB RMS (normal loudness, not loud)
    mp = -20.0 dB RMS (normal loudness, not soft)
    pp = -27.0 dB RMS (soft)
    ppp = -48.0 dBFS Peak (as soft as possible)"

    These values are before setting your track faders and sample patches to -6dB, right?

    2. Since I use reverb and track positioning, my instruments change their original volume. So this is why it would be helpful to know the dynamic range of e.g. the VI-16 after setting the volume faders and inserting reverb and other plugins.

    To avoid an answer like "It depends on the song...", what are the dynamic ranges of your instruments (arriving at the master output) in the Sunrise_master?

    3. Why do I change the single tracks and not the sum output by 6 dB?

  • last edited

    @Felix Bartelt said:

    1. When you describe

    "III. The measured loudness of an orchestra recording

    The decibel values are from a recording that has dynamics from ppp (as soft as possible) to fff (as loud as possible):

    ff = -12.5 dB RMS (as loud as possible)
    mf = -18.0 dB RMS (normal loudness, not loud)
    mp = -20.0 dB RMS (normal loudness, not soft)
    pp = -27.0 dB RMS (soft)
    ppp = -48.0 dBFS Peak (as soft as possible)

    These values are before setting your track faders and sample patches to -6dB, right?


    No!

    The values in the loudness table are the approximate minimum-to-maximum dynamic. These values are as measured on a finished stereo master mixed with wide-range dynamics. Most classical orchestral recordings have approximately the min/max dynamic range shown in this loudness table.

    .

  • last edited

    @Felix Bartelt said:

    2. Since using reverb and track-positioning, my instruments change their original volume.


    I don't use any additional 3D positioning tool. I checked them once and didn’t like the sound.

    I do all the 3D positioning and phantom-source placement in the stereo field "by hand", as in the old days, and without any additional plugin.

    .

  • last edited

    @Felix Bartelt said:

    So this is why it would be helpful to know the dynamic range of e.g. the VI-16 after setting the volume faders and inserting reverb and other plugins.


    Measuring every single instrument separately would be too much work, and work that would not produce meaningful and helpful results for the mixer.

    The most basic and important things for the mixer are:

    a) that the balance of all instrument tracks combines into a good stereo master.
    b) that no artifacts are produced at the master stereo fader while printing a mix, i.e. 0 dB+ values or severe clipping.
    c) that the stereo master has the required/wanted musical dynamics.

    This may sound a little dry, as if mixing were a boring process, but everybody who mixes knows that it is an extension of composing, quasi the final arranging.

    In my case, the wanted dynamics are always "wide range", similar to the dynamics of a recording made in a concert hall or studio. Whenever a stereo master has to be volume-maximized, i.e. for TV spots, we do that in a second work process, simply because we want to own a wide-range dynamic master; otherwise we could process the additional maximizing plugin chain right on the stereo master fader and print the mix mastered.

    My proposed system for mixing a wide-range stereo master is nothing new, but a description of what every mixing engineer's work has consisted of for decades.

    .

  • last edited

    @Felix Bartelt said:

    ...", how are the dynamic ranges of your instruments (ariving at master output) in the Sunrise_master?


    The "Sunrise" stereo master is the reference track after which the loudness table in question was made. The loudness values in the table accurately represent what the loudnesses are at the stereo master fader while printing the mix.

    I chose this track because the overall dynamics can be easily measured and also visually followed on the meter and in the wave editor, since the recording has a gradually rising loudness from beginning to end.

    .