Vienna Symphonic Library Forum

  • individual track vs. premix reverb

    Does anyone have any knowledge of, or an opinion on, this:

    Is it better to apply reverb - whether convolution or hardware - to an individual instrument rather than to that same instrument mixed dry with others? For example, if you have 12 separate instruments, rather than doing a dry premix of three groups of 4 instruments each and rendering those with 3 reverbs, can you obtain a more accurate or somehow better sound by rendering each instrument solo with reverb, then mixing those reverbed tracks? It seems to me that this might give a more accurate image - one that corresponds to each individual instrument sounding and reverberating within a concert hall - than having a mass of dry sound composed of several instruments with one reverb. Though maybe it makes absolutely no audible difference. I thought I heard a difference in an experiment, but I may have been hallucinating. I wonder, though, whether there is more accurate resolution in the reverb if each instrument has its own instance, even if they are the same reverb.
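A quick numerical sketch of why the answer hinges on whether the reverbs actually differ (synthetic noise stands in for the instrument tracks, and a toy decaying impulse response for the hall): convolution is linear, so if every track goes through the *same* impulse response, per-track rendering and a premix render produce the same mix up to floating-point error. Any audible gain from the per-track approach would have to come from giving each instrument its own impulse or source position.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
# Toy decaying impulse response and 12 synthetic "instrument" tracks
ir = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 400.0)
tracks = [rng.standard_normal(44100) for _ in range(12)]

# Method A: reverb each instrument separately, then mix the wet tracks
per_track = sum(fftconvolve(t, ir) for t in tracks)
# Method B: premix the dry tracks, then reverb once
premix = fftconvolve(sum(tracks), ir)

# Convolution is linear: identical IRs give identical mixes up to float error
print(np.max(np.abs(per_track - premix)))
```

So a blind A/B test of the two methods is only meaningful when the per-track reverbs are not all identical.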

  • ?

  • Hi William,

    I do have an opinion on this, however, I didn't reply to your thread initially because I didn't think my opinion would be that useful to you as you are a much more experienced VSL user and forum member than me. [:)]

    Anyway, do you have some experiment files that one could make an A/B comparison on?  I would think that using an individual reverb instance on each instrument could make each individual instrument sound great on its own, but because digital audio is amplitude cumulative my guess is that the whole ensemble would sound too muddy playing together.

    Instead of using multiple instances of a single impulse response reverb (tweaked for each instrument) maybe what you really want is a single instance of a multiple impulse response reverb (i.e. MIR). [:)]


  • No, I am not an expert with Altiverb at all. But I did once use individual track convolution for each and every instrument (about 16 instances). I rendered each track separately with reverb, and only then mixed them. It sounded startlingly better than one where I used a more normal premix (strings, woodwinds, brass, and percussion, totalling four instances). It didn't sound muddy, but much more real and full. I reasoned that this is because when you hear an orchestra live, you are actually hearing each of those instruments producing its own reverb separately, and then that is "mixed" in your ears as you sit in the building. So it is analogous to the individual track convolution approach. You are NOT hearing a dry "wall of sound" (a premix) that then reverberates. So I guessed that this method might be superior to the usual dry premix approach.

    I can see that nobody else here does this.  Perhaps because it takes longer to do than the usual premix approach which I admit is much more convenient.

    I have been too lazy to do that A/B comparison, but will try to and post something.

  • Hello William,

    To get some depth I use 2-4 reverbs with the same room or hall, each channel delayed by a different amount corresponding to the distance between the instrument groups and the listener. But I never had the idea to use a separate convolution reverb (not Altiverb, but Wizoo W2) for only one section or one instrument. It seems to be an interesting idea that is worth an attempt.
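The delay arithmetic behind this depth trick is simple: pre-delay each group's reverb by distance divided by the speed of sound. A small sketch (section names and distances are made up for illustration):

```python
# Pre-delay, in samples, corresponding to a section's distance from the listener,
# for layering several instances of the same hall at different depths.
SR = 44100              # sample rate in Hz
SPEED_OF_SOUND = 343.0  # metres per second at room temperature

def delay_samples(distance_m: float) -> int:
    """Samples of pre-delay for a source distance_m metres from the listener."""
    return round(distance_m / SPEED_OF_SOUND * SR)

# Hypothetical distances from the podium to each group
for section, dist in [("1st violins", 3.0), ("woodwinds", 7.0), ("percussion", 12.0)]:
    print(f"{section}: {delay_samples(dist)} samples")
```

Roughly 2.9 ms of extra pre-delay per metre of depth, which is why even a few metres between groups is audible.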



  • This is something I have been thinking about for quite some time. However, what I've come to realise while testing several possible configurations is that adequate positioning of instruments (I have tested it with panning, several plug-ins, and position placement within the dry reverb) matters more to overall sound integrity than the method of applying reverb. In theory the advantages are somewhat apparent, but in practice I've not yet found audible results that would convince me to sacrifice the extra processing power for the marginally better material.

  • To be specific about this idea, the steps would be :

    1) Freeze each individual instrument (already panned and imaged correctly in VE) to dry audio

    2) Combine all tracks in a DAW and make any corrections in balance still in the dry stage

    3) Select an appropriate Altiverb and render a new audio file for each track separately

    4) Combine these reverbed tracks and render the final mix with no additional processing
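The four steps above could be sketched offline like this (a minimal NumPy sketch, not anyone's actual tooling; `fftconvolve` stands in for the Altiverb render in step 3):

```python
import numpy as np
from scipy.signal import fftconvolve

def render_mix(dry_tracks: list[np.ndarray], ir: np.ndarray) -> np.ndarray:
    """Steps 3-4: reverb each frozen dry track separately, then combine."""
    wet = [fftconvolve(t, ir) for t in dry_tracks]   # step 3: one render per track
    mix = np.zeros(max(len(w) for w in wet))
    for w in wet:                                    # step 4: combine, no extra processing
        mix[:len(w)] += w
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix         # only guard against clipping
```

Steps 1-2 (freezing and dry balance) happen before this, so `dry_tracks` is assumed to already carry the panning and level decisions.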

  • My new template for mixing uses dry tracks which are recorded with only panning and FX information; reverb is applied later. I think I prefer this method now because I make the music with a temporary reverb, then use a much better room profile later on which is more accurate to the real room. So I would say this method can work, but only if you're able to apply the reverb in a correct way later; it's kind of easy to screw it up and end up with a dry sound plus a reverb tail, which doesn't sound good. If you're using pre-fade reverb application (meaning you use a bus send with the pre-fade option on the instrument track), then you can get very good results this way. The reason I ended up multi-tracking and using an audio mixing template, rather than working just on the MIDI/VSL tracks, is that I'm now working on a laptop for the duration, so I had to compromise on how I mixed. So yeah, this is all essentially what you've said. I hope to get my new TODD-AO template done soon (I know I keep saying this); it's still not 100% ready, but you can give it a try and see if it works for you, seeing as it's nearly the system you're talking about using here.

  • That's interesting, Hetoreyn - I agree that Todd-AO is maybe the best general reverb for orchestral work in Altiverb, and it is also the only one that has variations in the impulse source location with the microphone in the same position. That is how all convolutions should be recorded for orchestral use, because you are essentially trying to emulate a fixed listening point with different sound positions - not the REVERSE, which is what most of the impulses in their set have.

  • I had to edit this as I did not mean it so definitively, but it does seem that anyone using Altiverb with the different convolutions in all the supplied impulses, except for that particular Todd-AO set, is somewhat distorting the sound by using convolutions with different microphone positions - and then changing the relationships further by adjusting the dry/wet balance or the artificial speaker positioning as a workaround for the fact that you cannot use all-pure-wet convolutions recorded from the same location but with different sound source positions. I believe this is the idea behind MIR (?), and hopefully it will solve this problem, which is very characteristic of orchestral sample performance.


  • Yes, you're correct that the listening position is the podium in the TODD-AO hall. Although I am using 6 Altiverb instances to simulate the 3 room recording positions, with a near and a far reverb for each of the 3 layers, so in essence you're not hearing everything from just one depth layer. And you can change the perceived depth between the layers somewhat by altering the amount of wet/dry signal. What can I say: if this method doesn't work for you, then it doesn't work. I think it's quite effective myself, and as soon as I get the template and all the stuff on my site you can decide for yourself. I'm sure it won't suit everyone's tastes or needs, but it's yet another option for mixing. I myself am awaiting MIR with bated breath, as I think it will solve a LOT of problems for virtual room recording. I can only say that my TODD-AO room template is the closest approximation of the real room profile that I can make, and that the sound is quite pleasing to me. I'm trying to get all my stuff together to make the next podcast, in which I'll explain the operation of the template and how I arrived at it. Then I can post the stuff on my website and everyone with Logic 8 and Altiverb 6 can try it out.

  • No, that method works for me just as well. I probably overstated it, as I often do - I didn't mean that wet/dry doesn't work. You HAVE to use it, because you cannot use pure convolution. What I meant is that you SHOULD be able to use 100% wet convolution with different sound source positions. That would be a pure way of doing it, as opposed to the compromise of wet/dry ratios, which are totally artificial; in the real environment there is no such thing as a "dry" signal. Also, in Altiverb, the "speaker placement" stage position adjustment is artificial as well, as it is doctoring the original pure sound impulse.

  • Lately I've been using an EQ > Plat Verb > Dir Mixer plug-in chain, then Space Designer through a bus (this is Logic with 1st Edition, btw).

    This allows me to place the instrument in the stereo field with a localized verb/early-reflection sort of thing. The Plat Verb has a warmer, shorter verb (1.8 s) at a 75-80% wet/dry ratio; I narrow and pan that with the Dir Mixer, then send it to the main verb(s). Also, I've been working with zero pre-delay on everything, which helps kill that "dry signal combined with reverb" sound that William is talking about.

    Since I'm on an old machine, I usually bypass the Plat Verb while I work, occasionally unmuting it to check the part with others in its family (all brass, for instance).  At the end, I will freeze most of the tracks for final mix tweaking of the remaining parts.

    Works pretty well, and the dry, close VSL sits better with the Project Sam stuff (close or stage mic, as I send everything thru SDesigner in the end), for example.  Totally unscientific, I should say, as I am "earballing" the sound.


  • I tried a comparison once last year, on a full string section only. A) I took all my section instruments and printed them dry, in position, on their own stereo tracks (VI1, VI2, VA, VC and bass). B) Then I ran each track through its own Altiverb room (the corresponding section impulses offered in the orchestral score stages). C) The last step was to take the new tracks, mix all the room-treated sections, and blend in an overall verb (no ER, tail only) to a final stereo track. The end result was good but considerably noisier. When I compared it to another mix using the dry prints mixed directly with two verbs on an aux (the first as room ER and the second with hall tail only), I found that the individual-track method was slightly muddier and had a hell of a lot more hiss. This makes sense, as you take the noise level and multiply it by 5 in the final mix; that noise level adds up and becomes an issue. This was prior to the Todd-AO being released, so I don't know what the results would be using Todd, but I would guess that the various impulses vary in the amount of noise they carry. In the end, I decided that any gain in the image was nulled out by the increased processing time and, more importantly, the hiss, which was really killing it for me. Since then I have just mixed sections in place with Todd as a room and then a final verb such as TC VSS3 using a hall or such. The results have been good and the process is fast; only one print is required. I might try it again to see if I can get better results using Todd-AO. Lately I'm dealing with other issues, such as EQ and room-sound blending between different libs, which just complicates the matter.
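The hiss observation matches what you'd expect from summing uncorrelated noise: noise power adds, so five prints raise the noise floor by about 10·log10(5) ≈ 7 dB relative to a single print. A quick check with synthetic hiss (levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N_TRACKS = 5
# Uncorrelated hiss carried by each of the five reverbed prints
hiss = [rng.standard_normal(200_000) * 1e-3 for _ in range(N_TRACKS)]

rms_one = np.sqrt(np.mean(hiss[0] ** 2))    # noise floor of a single print
rms_mix = np.sqrt(np.mean(sum(hiss) ** 2))  # noise floor after summing all five

gain_db = 20 * np.log10(rms_mix / rms_one)
print(gain_db)  # close to 10*log10(5), i.e. about +7 dB
```

The musical signal, being correlated across renders of the same balance, doesn't grow the same way, so the signal-to-hiss ratio genuinely worsens with the per-track method unless the impulses themselves are very clean.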