Vienna Symphonic Library Forum
Forum Statistics

194,006 users have contributed to 42,905 threads and 257,892 posts.

In the past 24 hours, we have 3 new thread(s), 16 new post(s) and 96 new user(s).

  • Dietz, Beat - Reverb Question

    Gentlemen,

    In Beat's tutorial on audio, the audio tracks are shown routed directly through reverb (ambience, then reverb, in the diagram).

    My question: is this normal for reverb or convolution? If I run Altiverb, for example, the entire signal goes through Altiverb, as opposed to bussing and then combining with the dry signal as you would on an outboard mixing console?

    Right now I bus to an aux with a reverb insert (three of them, with different pre-delays for depth) and combine the two signals. If I route the output of the audio channels directly to the aux (say strings: Vln I, Vln II, Vla, VC, CB), how can I control the amount of reverb on each track? I would not want the same amount on the CBs as on the Vlns, would I?

    Thanks,

    DC
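The aux-send question above can be sketched numerically. This is a minimal, hypothetical model, not any DAW's actual routing: each track keeps its dry path at unity, and a per-track send level decides how much of it feeds one shared reverb bus. The names, gains and the toy "reverb" are illustrative only.

```python
def fake_reverb(bus, delay=3, gain=0.4):
    # Stand-in for Altiverb etc.: a single delayed echo, 100% wet.
    return [0.0] * delay + [gain * x for x in bus[: len(bus) - delay]]

def mix_with_sends(tracks, sends, wet_return=0.5):
    """tracks: {name: samples}; sends: {name: send level 0..1}."""
    n = max(len(s) for s in tracks.values())
    dry = [0.0] * n
    bus = [0.0] * n  # what the shared reverb will process
    for name, samples in tracks.items():
        g = sends.get(name, 0.0)
        for i, x in enumerate(samples):
            dry[i] += x       # dry path, unaffected by the send
            bus[i] += g * x   # per-track reverb amount
    wet = fake_reverb(bus)
    return [d + wet_return * w for d, w in zip(dry, wet)]

# Violins get a generous send, basses a small one - different reverb
# amounts per track, one shared reverb:
mix = mix_with_sends(
    {"VlnI": [1.0, 0.0, 0.0, 0.0, 0.0], "CB": [1.0, 0.0, 0.0, 0.0, 0.0]},
    {"VlnI": 0.8, "CB": 0.2},
)
```

In this scheme the send level, not the channel fader, sets each track's reverb amount, which is the point made in the replies below.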

  • Well, if you're using it on an aux insert, surely you can control the amount of reverb with the send and return levels. I'm not entirely sure how that works, as I use 4 busses, each with its own reverb for depth, with an overall master reverb on the output track. Not the best, I admit, but it works for me.

    Anyway, since I'm not Beat or Dietz I'll bugger off now [:P]. But I think the answer to your question lies in using the send and return levels to control how much of the reverb you use on the selected channels.

  • Personally, I like to have the reverb on a bus with the mix 100% wet, and I send all the tracks to an empty bus named "dry".
    So I can change the dry/wet balance at any time just by mixing the dry bus!

    I know it is not the usual way, but I like this method.

    thierry [:D]

  • Okay guys, in fact:


    EVERYONE MAY POST THEIR REVERB SETUP HERE

    As I said, I have all the tracks going through a dry stereo out, and then bus to three different verbs on auxes with different predelays for depth.

    What are people using for room size, pre-delay, density, or ANY parameter, and how are you narrowing the stereo field of certain instruments? (I know there are many ways and it will change for different things, but maybe share basic concepts that work for you.)

    Thanks and post away.

  • @dpcon said:

    ...My question: is this normal for reverb or convolution? If I run Altiverb for example - the entire signal goes through Altiverb as opposed to bussing and then combining with dry signal as you would on an outboard mixing console?...
    DC

    Hi
    As always, it is a matter of taste which system you choose. Perhaps MIR will
    give us the final reverb solution one day. But for now, we have to fiddle
    around until we hit on something to our taste.

    My current reverb system:
    I like to have different depths (three of them). To create them I currently use VST GigaPulse.
    I also like the crispness of all the well-recorded VSL instruments. Therefore
    I often take the MEDIUM Hall with GigaPulse. With this reverb you get nice depth
    and not too much reverb (mud). Nevertheless, sometimes I wish for more.
    In that case I open a fourth bus/group with one more reverb. I often use Pristine Space for
    that and load a church or concert hall room print with it. Now I'm able to add reverb with
    the send function at depths 1-3.

    Please check out the following diagram (Reverbmix 2):
    Reverb Diagrams
    Sound Example Depths
    GigaPulse, with Medium Hall:
    depth 1 = sViolin, depth 2 = StringEnsemble + sCello, depth 3 = Organ
    Pristine Space: "Concerthall", wet only:
    a little reverb added with it > more reverb for depth 3 than for depths 2 and 1

    Perhaps I'll change my current reverb setup once I own Altiverb for Windows, or later on, MIR [8-)] .

    Beat Kaufmann

    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  • Thanks Beat, I'm having a look at diagrams...

    Sound example is fantastic. I hear the exciter and it sounds great. I will experiment with the configuration of depth and then reverb afterward.

    I currently have your pre-delay settings: 0, 16, 34. Do you vary these to any degree?

  • @dpcon said:

    Thanks Beat, I'm having a look at diagrams...
    Sound example is fantastic. I hear the exciter and it sounds great. I will experiment with the configuration of depth and then reverb afterward.
    I currently have your pre-delay settings: 0, 16, 34. Do you vary these to any degree?


    Yes Dave, vary it and check the result with your monitors. The predelay is just one of many parameters for creating depth. You know it is a more complex thing (damping of frequencies, etc.). Another important parameter besides the predelay is the dry/wet balance.
    http://www.beat-kaufmann.com/downloads/audiodirectindirectdepth.mp3
    (from dry to wet)
    If you use a convolution reverb with its "room print", it is also possible that the print comes with its own built-in delays. So try different room prints. Sometimes they are good for ambience but not for reverb, and vice versa.
    BUT - you own Altiverb, do you? I don't know exactly - but it should be possible to get depth by shifting the loudspeaker symbols around.
    http://www.audioease.com/Pages/Altiverb/Altiverb5Stagepositions.html
    Perhaps you can forget the predelays this way...?
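The predelay-per-depth staging discussed here (0, 16 and 34 ms come up later in the thread) can be sketched as below. The 44.1 kHz rate and the 100%-wet placeholder are assumptions for illustration, and which predelay maps to which depth bus remains a matter of taste.

```python
SR = 44_100

def ms_to_samples(ms, sr=SR):
    return round(ms * sr / 1000)

def predelayed_wet(dry, predelay_ms, sr=SR):
    """Placeholder wet path: the dry signal shifted by the predelay.
    A real reverb tail would start at this shifted onset."""
    n = ms_to_samples(predelay_ms, sr)
    return [0.0] * n + list(dry)

for depth, pd in zip(("depth 1", "depth 2", "depth 3"), (0, 16, 34)):
    print(depth, "predelay:", pd, "ms =", ms_to_samples(pd), "samples")
```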


    Beat

  • Don't have Altiverb yet, just a plugin reverb. I'm experimenting with cutting lows with the Winds, and also some highs. I will experiment with pre-delay, because I don't think those settings are creating enough depth just yet. I'm also trying to figure out how to narrow the stereo field of the Winds in Digital Performer. The surround panner seems to work, but I have to figure out how to apply it to Winds and Brass only, and not Strings.

    I am continuing to read your tutorials, which are excellent.

  • Beat,

    Correction:
    I was adjusting the wrong pre-delay (late predelay). The settings you suggest, 0, 16, 34, are working pretty well now. Cutting the lows and narrowing the stereo fields of the winds is also working. Digital Performer has a plugin named Trim which works very well for narrowing the stereo image.

    Do you use a pre-fader setting for reverb? I find that helps for back-row percussion, etc.

  • I wouldn't add reverberance on the stereo master fader, or on the VSTi track fader, simply because one then has to level or re-level the dry/wet balance within the plugin. The bus method also helps to separate the direct sound from the reverberance while programming the reverb. The FUSING of the wet signal with the direct sound can also easily be achieved with reverberance on a bus. However, there are occasions where I add reverberance to a finished mix which already had reverb on the individual tracks, to put a final unifying reverb envelope over the mix; this, however, I also do with a bus.

    Furthermore, using a bus for reverberance has several other advantages, i.e. you can level, sculpt and shape the 100% wet reverb on its own without affecting the dry/direct signal.

    If you don't have one of those newer reverb plugins, where the AIR ABSORPTION for further-away instruments can be manipulated in a matrix field, you have to use EQ to dampen top frequencies for the more distant dimensions.
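A crude stand-in for that EQ damping is a first-order lowpass that rolls off the top end more for "farther" buses. The cutoff-per-distance choice below is invented for illustration; real air absorption is frequency- and humidity-dependent and more complex.

```python
import math

def lowpass(x, cutoff_hz, sr=44_100):
    """One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, state = [], 0.0
    for s in x:
        state += a * (s - state)
        y.append(state)
    return y

impulse = [1.0] + [0.0] * 63
near = lowpass(impulse, 12_000)  # front row: highs barely touched
far = lowpass(impulse, 3_000)    # back row: highs clearly damped
```

Both settings pass the low end (the impulse responses sum to roughly 1), but the lower cutoff smears the transient, which reads as distance.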

    For concert hall type reverberance, I prefer a general pre-delay of around 125ms. Anything below 100ms is rather not perceived as reverb. However, a real concert hall has first reflections coming back from the walls and ceiling as early as 60ms.

    At the very beginning of reverberance, the ambience programming can be subdivided into two slightly different aspects:

    RUNNING REVERBERANCE
    that is the reverberance that is perceived while music is played continuously. The perception of running reverberance is governed by the early decay time, either calculated over the first 15 dB of the decay or over the first 380 ms of the decay.

    STOPPED REVERBERANCE
    is the effect that is heard at the termination of the sound and is the "classical" reverberation time.

    The stopped reverberance is very helpful for the ear when programming artificial reverb, for example with a staccato tutti chord, or a section tutti chord. A good method is to compare real orchestra recordings with your artificial reverberance; very useful are the final chords, where you hear the stopped reverberance.
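The two quantities above can be put into numbers: from a decay's level envelope you can measure how long it takes to fall a given number of dB and extrapolate to a 60 dB reverberation time. A minimal sketch on an idealized exponential tail; the 15 dB window follows the post, everything else (sampling step, test decay) is an assumption.

```python
def time_to_fall(env_db, drop_db, dt):
    """Seconds until the envelope (dB, starting at 0) has fallen by drop_db."""
    for i, level in enumerate(env_db):
        if level <= -drop_db:
            return i * dt
    return None

def rt60_from_early_decay(env_db, dt, window_db=15.0):
    """Extrapolate the 60 dB decay time from the early-decay window."""
    t = time_to_fall(env_db, window_db, dt)
    return None if t is None else t * 60.0 / window_db

# Ideal tail decaying 30 dB per second, sampled every millisecond:
dt = 0.001
env = [-30.0 * i * dt for i in range(4000)]
print(rt60_from_early_decay(env, dt))  # -> 2.0
```

On a real recording the early and late slopes differ, which is exactly why the post distinguishes running from stopped reverberance.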

    The next most important aspect is FUSING, which is, simply said, choosing the right level of the 100% wet reverberance signal, so that we perceive the artificial reverberance as real, i.e. as fused with the direct sound.

    Generally, I run 3 reverberation units on each instrument section (woodwind, brass, percussion and strings), plus three units on each low instrument like Gran Cassa etc., and another three, slightly more crystalline, for solo instruments. For pre-production mixes in the box, these three per-section bus reverb plugins are: two AphroV-1 + one Lexicon Pantheon. Again, the most important thing is to fuse the reverbs; even a completely "wrong" reverb can be fused for special effect. The AphroV-1 from VB's Audio Mechanic & Sound Breeder offers the predelay possibilities I prefer, with three early reflections directly accessible; this allows, with two units, six predelays from 50ms to 150ms.

    The next thing is to pan the instruments to a place of your preference in the stereo field; this is done before programming the definitive reverb. That's why I made the remark about using two faders for more flexibility. In-the-box production narrows our possibilities on the VSTi faders, as well as on bus effects; however, if the tracks are rendered/bounced once to stereo, which I do dry, then you have all the necessary possibilities.

    I hope this is written simply enough that people who are new to this subject can also understand it.

    .

  • Thanks for that. I am bussing to three reverbs with different predelays. I have damped the Winds a little with EQ for an air absorption effect. I'm having trouble making the Brass sound farther back. I have them going to the same reverb as Winds I, and haven't EQ'd for absorption (so maybe that's the problem).

  • Assuming that all instruments were recorded at an equal distance on the Silent Stage, you would also have to dampen the direct sound. You may also consider that brass, or any instruments at the back, are a tiny bit too bright in the virtual mix compared to listening from a good seat in the concert hall. It requires quite complex thinking to re-create it virtually.

    The last two days I was preparing for working with VSL, and created a default project with everything set up: buses, reverb presets, saved multi patches, etc.

    Here is my test to simulate a near-real reverb artificially. It's the last chord of Bruckner's No. 8, I. Allegro moderato: first with the original tail, second with my artificial reverb: original concert hall_artifical recreation.wav
    http://www.yousendit.com/transfer.php?action=download&ufid=4A0CD5CD0C5CFABA

    Here is a drawing for depth visualization and air absorption areas, with the microphone above, and circa 2-3 feet behind, the head of the conductor: air absorption_conductor position.jpg
    http://www.yousendit.com/transfer.php?action=download&ufid=A34A79BE10762D00

    Of course, this is just preparatory work to get started. It is not my goal to re-create an orchestra recording, but to apply a room, pan and mix technique that derives from my imagination, since I compose only original music where each piece has slightly different needs. I will tell you more about how I create this larger-than-life sound, also with additional vertical dimensions. If, in the end, nothing helps it sound better than a real orchestra, I will write scores and drop the virtual stuff for my serious composing.

    .

  • It seems you guys are pretty far along with the reverb discussion; however, you also asked about limiting the width of individual stereo instruments.

    One easy way is to use the Waves S-1 plugin. It limits both the width and the direction of a channel. When (in Logic) I have several input channels of, say, first violins, I combine them by bussing them to an Aux channel (merely to have them as a single point source for effect & reverb sends - you might want to think of them as subgroups). Here I instantiate S-1, and from this point on in the signal chain, S-1 has determined the width and directionality of the instrument subgroup. Then I send the Aux channel to Bus channels for routing to my main 1-2 outputs.

    Hmm, sounds involved. But it really is quite easy and straightforward. The results are exactly what I've been looking for to eliminate that errant stereo anomaly with most VSL instruments. You know, the violin is playing in the left channel and suddenly two or three notes pop out in the right channel. Very disconcerting. Now this can be completely controlled.

    The bussing, of course, would be a lot easier if you didn't need to subgroup channels of the same instrument. It's not as necessary now as it was prior to the VI.

    Of course you could use S-1 on reverb too.
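Width limiting of the kind described above can be thought of as a mid/side operation: shrink the side (difference) signal and the image narrows toward mono. This is a generic M/S sketch, not Waves' actual S-1 algorithm.

```python
def narrow(left, right, width):
    """width = 1.0 leaves the image alone; width = 0.0 collapses to mono."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = 0.5 * (l + r)
        side = 0.5 * (l - r) * width
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

# A note popping out hard right gets pulled toward the center:
l, r = narrow([0.0], [1.0], width=0.5)
print(l, r)  # -> [0.25] [0.75]
```

The same matrix halves the "errant note in the other channel" problem: whatever only appears in one channel is mostly side signal, so scaling the side pulls it in.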

  • @dpcon said:

    I'm having trouble making the Brass sound farther back. I have them going to same reverb as Winds I and haven't eq'd for absorbtion (so maybe that's the problem.)


    When creating an artificial concert hall reverb, besides air absorption, another important factor in making an instrument sound more distant than others, i.e. moving it back in the stereo field, is to delay the direct sound plus the total reverberance. Everything at a larger distance from the listening position arrives later at your ear.

    Without going into all the details, one can say that the localisation of a sound source between the loudspeakers, i.e. in the stereo field, is obtained by creating 1) a time difference between them, 2) intensity differences, or 3) a combination of both intensity and time differences. Intensity differences can be created in a simplified way with the pan knob, for example.

    In a concert hall, the speed of sound is approx. 342 meters per second at 18 degrees Celsius, or 1122 feet per second at 64.4 degrees Fahrenheit (1.13 feet = 1 millisecond).
    Example, time difference for distance: let's say you want to move the trumpets 15 feet behind the front row of violas; this means you have to delay the trumpets and their reverb by about 13 milliseconds (15 / 1.13) compared to the violas. You don't have to open a delay plugin for the track; simply delay the trumpet MIDI track by 13 milliseconds.
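The distance arithmetic can be checked with the post's own figures: sound travels about 1122 ft/s (approx. 342 m/s at 18 degrees C), i.e. roughly 1.13 ft per millisecond.

```python
# Speed of sound per the post; feet in, milliseconds of delay out.
SPEED_FT_PER_S = 1122.0

def delay_ms_for_distance(feet):
    return feet / SPEED_FT_PER_S * 1000.0

# Trumpets placed 15 ft behind the violas arrive about 13 ms later:
print(round(delay_ms_for_distance(15)))  # -> 13
```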

    The human auditory system is capable of interpreting left/right arrival differences down to about 7 microseconds (7/1,000,000 of a second) to detect from which direction a sound is coming. Example one, time difference: you want the trumpets to be heard coming from the right half of the stereo field; then delay the left channel by 7 microseconds, and you will hear the trumpet coming from the right side even though the left channel has exactly the same level. Example two, time difference: let's assume the trumpet is nearer to the right wall; then the reflections from the right wall would arrive earlier at your eardrum than the reflections from the left walls - you have to program these L/R reflection time differences into the reverb device.

    With these time-difference parameters you can place the instruments exactly where you want them in the stereo field: accurate to one foot with milliseconds, or accurate to 1/10 of a foot with devices that have microsecond scaling. The intensity differences you simply make with the pan knob, or ultimately with separate faders for each stereo side with two pan knobs. Drawing a little ray study of your imaginary room with its direct and reflected sound, including the time of travel to your eardrum, will help. This may all sound a bit too scientific, but that's the way it is, at least very simply put. Be careful, because I think the VSL stereo samples also have phase differences, which sometimes leads to other ways of handling what I just said.
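One practical caveat with these micro-delays, sketched below under assumed conditions: at 44.1 kHz one sample lasts about 22.7 microseconds, so a 7 microsecond channel delay is sub-sample and would need fractional-delay filtering rather than a plain integer-sample shift. The helper names and numbers are illustrative.

```python
SR = 44_100

def itd_in_samples(microseconds, sr=SR):
    # How many samples a given interaural time difference spans.
    return microseconds * 1e-6 * sr

def delay_channel(samples, n):
    """Integer-sample delay of one channel; delaying the left lateralizes right."""
    return [0.0] * n + list(samples[: len(samples) - n])

print(itd_in_samples(7))    # about 0.31 samples: below one-sample resolution
print(itd_in_samples(500))  # a large ITD, about 22 samples
```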

    .

  • Erratum: 7 microseconds = 0.007 milliseconds (7/1,000,000 of a second)

    .

  • Angelo,

    Brilliant stuff which makes a lot of sense. I was so busy working I didn't read it thoroughly before but I appreciate the information very much.

  • Well, I have a few days off, and am testing all the panorama & 3D plugins - and what do you know, I found one, "Panorama 5", which sounds very good and opens up a lot of possibilities for the VSL stereo ORTF samples that are not realizable with pan knobs: sound localization, HRTFs, binaural synthesis, crosstalk cancelling, and acoustic modeling in 3D.

    All you have to know for a start is that the standard stereo speaker setup is 30 degrees from center per monitor. The headphone presets have less coloration, and also sound good on speakers. Elevate the flutes above the english horn, etc.

    Please download the 30-day demo and experiment; I would really like to know what you all think of it applied to VSL orchestra compositions - especially whether you think the sonic quality is right.

    Panorama 5:
    http://wavearts.com/Panorama5.html

    .

  • This is a picture of where it should be possible to place instruments: a full 360 degrees around, elevated from 40 degrees down to 90 degrees up.

    http://www.tdt.com/Sys3WebHelp/hrtfhead.gif

    .

  • I'm learning a lot here, chaps, so thanks a lot, and if possible, keep it coming.
    Angelo, I downloaded the Panorama demo and tried it, and I'm impressed with the ease of use - along with yours, and others', formidable knowledge of the subject in general!

    Regards,


    Alex.