Vienna Symphonic Library Forum

  • Beat,

    Correction:
    I was adjusting the wrong pre-delay (late predelay.) Those settings you suggest 0, 16, 34 are working pretty well now. Also cutting the lows and narrowing stereo fields of winds also is working. Digital Performer has a plugin named Trim which works very good for narrowing stereo image.

    Do you use a pre-fader send for reverb? I find that helps for back-row percussion etc.

  • I wouldn't add reverberance on the stereo master fader or the VSTi track fader, simply because one would then have to re-level the dry/wet balance within the plugin. The bus method also helps to separate the direct sound from the reverberance while programming the reverb, and the FUSING of the wet signal with the direct sound is easily achieved with reverberance on a bus. There are occasions where I add reverberance to a finished mix that already had reverb on the individual tracks, as a final unifying reverb envelope over the whole mix, but this too I do via a bus.

    Furthermore, using a bus for reverberance has several other advantages: you can level, sculpt and shape the 100% wet reverb on its own without affecting the dry/direct signal.

    If you don't have one of those newer reverb plugins where the AIR ABSORPTION for farther-away instruments can be manipulated in a matrix field, you have to use EQ to dampen the top frequencies for this more distant dimension.
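    That EQ damping for distance can be approximated in code. As a rough illustration only (not anything from VSL or the plugins discussed in this thread), a first-order lowpass darkens a signal the way air absorption rolls off the highs of distant instruments:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Crude stand-in for air absorption: a first-order lowpass
    that attenuates highs while letting lows pass. Hypothetical
    helper, not part of any plugin mentioned in the thread."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)  # smoothing coefficient
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y  # first-order IIR smoothing
        out.append(y)
    return out
```

    Lowering cutoff_hz pushes an instrument "farther back"; a real air-absorption model would make the damping increase smoothly with both frequency and distance.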

    For concert-hall-type reverberance, I prefer a general pre-delay of around 125ms. Anything below 100ms is barely perceived as reverb on its own, although in a real concert hall the first reflections come back from the walls and ceiling as early as 60ms.
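    Those numbers (60 ms first reflections, ~125 ms pre-delay) follow directly from path lengths. A minimal sketch, using a hypothetical hall geometry of my own choosing:

```python
SPEED_OF_SOUND_FT_PER_S = 1122.0  # approx., at 18 degrees Celsius

def reflection_delay_ms(direct_path_ft, reflected_path_ft):
    """Arrival-time gap between a reflection and the direct sound,
    given both path lengths in feet (hypothetical geometry)."""
    return (reflected_path_ft - direct_path_ft) / SPEED_OF_SOUND_FT_PER_S * 1000.0
```

    A wall path 67 feet longer than the direct path arrives about 60 ms late; one 140 feet longer arrives about 125 ms late.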

    At the very beginning of reverberance programming, the ambience can be subdivided into two slightly different aspects:

    RUNNING REVERBERANCE
    that is the reverberance that is perceived while music is played continuously. The perception of running reverberance is governed by the early decay time, either calculated over the first 15 dB of the decay or over the first 380 ms of the decay.

    STOPPED REVERBERANCE
    is the effect that is heard at the termination of the sound and is the "classical" reverberation time.

    The stopped reverberance is very helpful for the ear when programming artificial reverb, for example with a staccato tutti chord or a section tutti chord. A good method is to compare real orchestra recordings with your artificial reverberance; the final chords, where you hear the stopped reverberance, are especially useful.

    The next most important aspect is FUSING, which is, simply said, choosing the right level for the 100% wet reverberance signal so that we perceive the artificial reverberance as real, i.e. fused with the direct sound.

    Generally, I run 3 reverberation units on each instrument section (woodwinds, brass, percussion and strings), plus three units on each low instrument like Gran Cassa etc., and another three slightly more crystalline ones for solo instruments. For pre-production mixes in the box, these three per-section bus reverb plugins are: two AphroV-1 + one Lexicon Pantheon. Again, the most important thing is to fuse the reverbs ---> even a completely "wrong" reverb can be fused for special effect. The AphroV-1 from VB's Audio Mechanic & Sound Breeder offers the pre-delay possibilities I prefer, with three early reflections directly accessible; with two units this allows six pre-delays from 50ms to 150ms.

    The next thing is to pan the instruments to a place of your preference in the stereo field; this is done before programming the final reverb. That's why I made the remark about using two faders for more flexibility. In-the-box production narrows our possibilities on the VSTi faders as well as on bus effects; however, once the track is rendered/bounced to stereo, which I do dry, you have all the necessary possibilities.

    Hope this is written simply enough that people who are new to this subject can also understand it.

    .

  • Thanks for that. I am bussing to three reverbs with different pre-delays. I have damped the winds a little with EQ for an air absorption effect. I'm having trouble making the brass sound farther back. I have them going to the same reverb as Winds I and haven't EQ'd for absorption (so maybe that's the problem).

  • Assuming all instruments were recorded at an equal distance in the Silent Stage, you would also have to dampen the direct sound. You may also find that the brass, or any instruments in the back, are a tiny bit too bright in the virtual mix compared to listening from a good seat in the concert hall. Re-creating this virtually requires quite complex thinking.

    The last two days I was preparing to work with VSL, and created a default project with everything set up: buses, reverb presets, saved multi patches etc.

    Here is my test simulating a near-real reverb artificially; it's the last chord of Bruckner's No. 8, I. Allegro moderato, first with the original tail, second with my artificial reverb: original concert hall_artifical recreation.wav
    http://www.yousendit.com/transfer.php?action=download&ufid=4A0CD5CD0C5CFABA

    Here is a drawing for depth visualization and air absorption areas; the microphone is imagined above, and circa 2-3 feet behind, the head of the conductor: air absorption_conductor position.jpg
    http://www.yousendit.com/transfer.php?action=download&ufid=A34A79BE10762D00

    Of course this is just preparation to get started. It is not my goal to re-create an orchestra recording, but to apply a room, pan, mix etc. technique that derives from my imagination, since I compose only original music where each piece has slightly different needs. I will tell you more about how I will create this larger-than-life sound, also with additional vertical dimensions. If in the end nothing sounds better than a real orchestra, I will write scores and drop the virtual stuff for my serious composing.

    .

  • It seems you guys are pretty far along with the reverb discussion; however, you also asked about limiting the width of individual stereo instruments.

    One easy way is to use the Waves S-1 plugin. It limits both the width and the direction of a channel. When (in Logic) I have several input channels of, say, first violins, I combine them by bussing them to an Aux channel (merely to have them as a single point source for effects & reverb sends; you might want to think of them as subgroups). Here I instantiate S-1, and from this point on in the signal chain S-1 determines the width and directionality of the instrument subgroup. Then I send the Aux channel to bus channels feeding my main 1-2 outputs.

    Hmm, sounds involved. But it really is quite easy and straightforward. The results are exactly what I've been looking for to eliminate that errant stereo anomaly with most VSL instruments. You know, the violin is playing in the left channel and suddenly two or three notes pop out in the right channel. Very disconcerting. Now this can be completely controlled.

    The busing of course would be a lot easier if you didn't need to subgroup channels of the same instrument. It's not as necessary now as it was prior to the VI.

    Of course you could use S-1 on reverb too.

  • @dpcon said:

    I'm having trouble making the brass sound farther back. I have them going to the same reverb as Winds I and haven't EQ'd for absorption (so maybe that's the problem).


    In creating an artificial concert hall reverb, besides air absorption, another important factor in making an instrument sound more distant than others, i.e. moving it back in the stereo field, is to delay its direct sound plus its total reverberance. Everything at a larger distance from the listening position arrives later at your ear.

    Without going into all the details, one can say that the localisation of a sound source between the loudspeakers, i.e. within the stereo field, is obtained by creating 1) a time difference between the channels, 2) an intensity difference, or 3) a combination of both intensity and time difference. Intensity differences can be made, in a simplified way, with the pan knob, for example.
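    The intensity-difference part, i.e. what a pan knob does, can be sketched with the standard constant-power pan law (a generic textbook formula, not necessarily what any specific DAW's knob implements):

```python
import math

def constant_power_pan(sample, pan):
    """Split a mono sample into (left, right) with a constant-power
    pan law; pan runs from -1.0 (hard left) to +1.0 (hard right)."""
    angle = (pan + 1.0) * math.pi / 4.0  # maps pan to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)
```

    At center (pan = 0) each side gets cos(45 degrees) ≈ 0.707 of the signal, i.e. -3 dB per side, so the perceived loudness stays roughly constant as you sweep across the field.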

    In a concert hall the speed of sound is approx. 342 meters per second at 18 degrees Celsius, or 1122 feet per second at 64.4 degrees Fahrenheit (1.12 feet ≈ 1 millisecond).
    Example, time difference for distance: let's say you want to move the trumpets 15 feet behind the front row of violas; this means you have to delay the trumpets and their reverb by about 13 milliseconds compared to the violas. You don't have to open a delay plugin on the track, simply delay the trumpet MIDI track by 13 milliseconds.
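    The feet-to-milliseconds conversion is simple enough to wrap in a helper; a sketch using the same approx. 1122 ft/s figure as above:

```python
FEET_PER_MS = 1.122  # from approx. 1122 feet per second at 18 degrees Celsius

def depth_delay_ms(extra_distance_ft):
    """Delay in milliseconds for an instrument placed
    extra_distance_ft farther back than the reference row."""
    return extra_distance_ft / FEET_PER_MS
```

    For trumpets 15 feet behind the violas this gives about 13.4 ms.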

    Human hearing can interpret left/right arrival-time differences down to 7 microseconds (0.000007 s) to detect which direction a sound is coming from. Example one, time difference: you want the trumpets to be heard coming from the right half of the stereo field; delay the left channel by 7 microseconds and you will hear the trumpets coming from the right side even though the left channel has exactly the same level. Example two, time difference: let's assume the trumpets are nearer to the right wall; then the reflections from the right wall arrive earlier at your eardrum than the reflections from the left wall - you have to program these L/R reflection time differences into the reverb device.

    With these time-difference parameters you can place the instruments exactly where you want them in the stereo field: accurate to one foot with milliseconds, or to 1/10 of a foot with devices offering microsecond scaling. The intensity differences you simply make with the pan knob, or ultimately with separate faders and pan knobs for each stereo side. Drawing a little ray study of your imaginary room with its direct & reflected sound, including the travel time to your eardrum, will help. This may all sound a bit too scientific, but that's the way it is, at least very simply put. Be careful, because I think the VSL stereo samples also have phase differences, which sometimes calls for other ways to handle what I just said.
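    To get a feel for what microsecond scaling means at common sample rates, a small sketch (my own arithmetic, not from any device mentioned here):

```python
def delay_in_samples(delay_us, sample_rate):
    """Interchannel delay expressed in audio samples."""
    return delay_us * 1e-6 * sample_rate

def us_per_sample(sample_rate):
    """Smallest whole-sample delay step, in microseconds."""
    return 1e6 / sample_rate
```

    At 44.1 kHz one sample is already about 22.7 microseconds, so a 7-microsecond shift is only about 0.31 of a sample; microsecond-accurate placement therefore needs fractional-delay processing or much higher sample rates.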

    .

  • Erratum: 7 microseconds = 0.007 milliseconds (7/1,000,000 s)

    .

  • Angelo,

    Brilliant stuff which makes a lot of sense. I was so busy working that I didn't read it thoroughly before, but I appreciate the information very much.

  • Well, I have a few days off and am testing all the panorama & 3D plugins, and guess what, I found one, "Panorama 5", that sounds very good and opens up a lot of possibilities for the VSL stereo ORTF samples not realizable with pan knobs: sound localization, HRTFs, binaural synthesis, crosstalk cancelling, and acoustic modeling in 3D.

    All you have to know to start is that the standard stereo speaker setup is 30 degrees from center per monitor. The headphone presets have less coloration and also sound good on speakers. Elevate the flutes above the English horn, etc.

    Please download the 30-day demo and experiment; I would really like to know what you all think of it applied to VSL orchestra compositions, especially whether you think the sonic quality is right.

    Panorama 5:
    http://wavearts.com/Panorama5.html

    .

  • This is a picture of where it should be possible to place instruments: full 360 degrees around, elevated from 40 degrees down to 90 degrees up.

    http://www.tdt.com/Sys3WebHelp/hrtfhead.gif

    .

  • I'm learning a lot here chaps, so thanks a lot, and if possible, keep it coming.
    Angelo, I downloaded the Panorama demo and tried it. Impressed with the ease of use, and with your, and others', formidable knowledge of the subject in general!

    Regards,


    Alex.