Vienna Symphonic Library Forum

  • I am curious to know how people using Altiverb handle the direct signal from each instrument or group.

    Do you pan the direct signals (assuming you have a bounced dry audio track for each instrument) to correspond as closely as possible to the Altiverb stage positioning, and then mix them in as desired? Or do you use the wet/dry mix on each instance of Altiverb instead? Is there any real difference between the two methods?

    Also, as far as the TAIL is concerned: does having stage positioning switched on have any effect on it (for example, a delay introduced to the tail the further back you go in stage positioning)? If so, could that be why having stage positioning AND tail switched on for each instance of Altiverb sounds better than one general tail for the whole mix?

    I've found that when I listen to my VSL tracks with no reverb at all (with or without panning), they sound tonally good but obviously flat and dry, as you would expect. If I listen to just the stage-positioning mix on its own, with no tails or direct signal, it sounds very convincing spatially but quite dull tonally. When I then start to mix in the directs (all panned appropriately), I get back some of that tonal detail (especially the high end and 'bite') BUT start to lose that wonderful sense of real 3-D space again!

    Any shared wisdom much appreciated!
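    The two methods being compared can be sketched numerically. Below is a minimal pure-Python illustration; the function names and sample values are invented for the sketch and have nothing to do with Altiverb's actual controls:

```python
# Two ways to combine a dry instrument signal with its reverb return.
# Signals are plain lists of mono samples; all values are illustrative.

def wet_dry_mix(dry, wet, mix):
    """Method B: one wet/dry knob per reverb instance.
    mix = 0.0 is fully dry, 1.0 is fully wet; dry and wet levels are tied."""
    return [(1.0 - mix) * d + mix * w for d, w in zip(dry, wet)]

def panned_dry_plus_wet(dry, wet, gain_l, gain_r):
    """Method A: pan the bounced dry track to match the stage position,
    then sum it with a 100%-wet return. Position (gain_l/gain_r) and
    reverb level are now independent controls. Returns (left, right)."""
    left  = [gain_l * d + w for d, w in zip(dry, wet)]
    right = [gain_r * d + w for d, w in zip(dry, wet)]
    return left, right
```

    One real difference between the two: the wet/dry knob scales the dry and wet levels against each other in a single control, while a separately panned dry track plus a 100%-wet send lets you set position and dry/reverb balance independently per instrument.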

  • I use four or more instances of Altiverb-- only one, on the master, uses the tail without the pre-delay; the others use pre-delays without the tail.

    Each instrument group gets its own pre-delayed stereo aux bus instance, 100% wet. I like using omnis on the strings with a wide front stage position, and also omnis on the French horns. I'll use Epic along with the others (a4, a3, a2) for larger projects, but for French horn solos I'll use a cardioid with a far upstage position that "fits" sonically.

    I'll use only one instance for a reverb tail with the pre-delay disabled. This is often to just add enough "blossom" to the ambience rather than to saturate the mix.

    I dread using EQ, but it's an occasional necessity.

    Basically, it follows Maarten's tutorial video on the Audio Ease site, with some additional modifications.

    If a solo instrument gets lost in the mix, I may choose to put it on its own track with a unique pre-delay. For some percussion, such as timpani, the transients can lose their impact when placed upstage. A touch of compression can restore the attacks, but it must be used delicately to retain the naturalness of the samples.

    A touch of peak limiting on the master adds the final "glue". Those are the basics for me and there are many variations.
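    As a rough sketch of the routing just described -- hypothetical names and toy numbers, only to show the bus structure, not Audio Ease's or VSL's actual API:

```python
# Per-group, 100%-wet, pre-delayed early buses plus one shared tail
# instance with its pre-delay disabled. Signals are lists of samples.

def pre_delay(bus, samples):
    """Apply a pre-delay by prepending silence to a wet bus."""
    return [0.0] * samples + bus

def mix(*buses):
    """Sum buses sample-by-sample, zero-padding to the longest."""
    n = max(len(b) for b in buses)
    return [sum(b[i] for b in buses if i < len(b)) for i in range(n)]

strings_wet = [0.5, 0.4, 0.3]            # stand-in for a 100%-wet return
horns_wet   = [0.2, 0.2]                 # another group's wet return
tail_wet    = [0.1, 0.1, 0.1, 0.1, 0.1]  # shared "blossom" tail

master = mix(pre_delay(strings_wet, 2),  # strings: short pre-delay
             pre_delay(horns_wet, 4),    # horns sit further upstage
             tail_wet)                   # tail instance, pre-delay off
```

    Giving each upstage group a longer pre-delay while the shared tail runs without one mirrors the description above: the early energy is staggered by position, and the single tail just adds ambience on top.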

  • You said that you basically follow Maarten's tutorial. Am I right that one exception you forgot to mention is that you are not using the stage positioning? And what do you mean by "Epic (a4, a3, a2)"?

  • Wow-- a full year and a month have gone by!!

    I've had VSL for a while and bought Epic Horns as an add-on to the Pro Edition. "a4, a3, and a2" refer to the samples themselves, with four, three, and two players respectively. I probably shouldn't have used the term Epic when referring to the new Cube, but it's sort of a force of long-term habit. In Brass II, though, the 8-player samples are occasionally referred to parenthetically as "Epic".

    I use stage positioning, btw.

  • Apologies for what might be a stupid question, but when using stage positioning in AV, do you still power pan in VE?

    Rob 


  • Of late, I've only been trimming stereo widths in VE, to avoid running a separate stereo-width trimmer instance on each individual channel; I run my AV instances in my DAW's mixer channels, largely because I'd already established that workflow before VE was released. Another thing is that I'm on a Mac, and I've discovered VE can get overwhelmed quite easily-- even crashing with fx and VSL instances running concurrently. Dunno why, but since I've been able to work with it at all, I haven't bothered to press the VSL team about it. The other reason I hesitate to complain is that I don't hear other users talking about it much. Of course I'd love to put all aspects of VE to work, but I don't want to push it.


  • Thanks jwl for the reply. My workflow is similar. I run four slaves, and when done with a cue I bounce (via Lightpipe) to the DAW, where I have AV.

    Right now I am 'power panning' in VE on all four slaves; then, once each instrument (or group) is in the DAW, I follow Maarten's tutorial. Finding the balance of the final master-bus 'tail' is key, as I see it (careful not to muddy everything up).

    Lots to learn on AV but I am liking the first results.

    Rob

    (Christian M. - you have had AV for a year now - any words of wisdom you can share?[:)]) 


  • @Rob Elliott said:

    Right now I am 'power panning' in VE on all four slaves then when each instrument (or group) is in the DAW follow Maarten's tutorial. 


    So you are using both AV stage positioning [for R-L pan/stereo width/front-back positioning] and VE's Power Pan [for R-L pan/stereo-width adjustment] simultaneously? I'm still learning w/VSL and AV, but how does this method provide a better-sounding result? How is this different from VE Power Pan [to handle the R-L pan/stereo width] plus AV instances on auxes fed by instrument tracks set to 'pre-fader send', as shown in the Special Edition demo video on mixing? It sounds like VE and AV are duplicating the pan/stereo-width settings. That makes me nervous for some reason-- like they might not 100% match each other... thanks, Charles
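    For reference, a constant-power ("power pan") law looks something like the generic sketch below -- this is not VSL's or Audio Ease's actual implementation, just the standard cos/sin formulation:

```python
import math

# Generic constant-power pan law: left/right gains taken from cos/sin so
# that gain_l**2 + gain_r**2 == 1 at every position (constant power).

def power_pan(sample, position):
    """position in [0, 1]: 0 = hard left, 0.5 = center, 1 = hard right."""
    theta = position * math.pi / 2
    return sample * math.cos(theta), sample * math.sin(theta)
```

    Applying a pan law twice -- once in VE and once via AV's stage positioning -- multiplies the two gain pairs, so the combined image generally matches neither setting on its own, which is exactly the duplication being asked about.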

  • You are right that they would be duplicating, and not in the same way or by the same amount. You can probably get away with it, though it is theoretically incorrect, at least as far as I can determine.

    I have come to the conclusion that it is absolutely wrong to use any dry signal. I know that may sound crazy, but you are abusing the whole theory behind Altiverb if you do, because you are no longer placing your source within the impulse - just the tail. That is stupid (though of course doing something stupid sometimes works). The entire sound should be processed by the convolution, or you are crudely abusing the software as if it were some new-fangled version of a hardware reverb unit.

    This includes the positioning. The idea behind stage positioning is to let you avoid using the dry signal for positioning. The sound is still convolved, but it sits where the icons are located and has the frequency coloration of the impulse.

    This is complicated by the fact that if your original sound is stereo, it already has placement embedded within it. So in that case - a stereo source - you MUST NOT use stage positioning changes, but instead use the default location, or perhaps spread it apart further - but NOT shifted from one side to the other, because that is an extremely complex distortion of the audio. The most direct use of the convolution principle - in other words, the most accurate - is putting the already-panned dry source through the convolution processing. If you have a mono source - which almost no one uses, apparently - then you can scoot it all over the stage with no problems. Also, if you have an unpanned stereo ensemble, you could move it on the stage as well - for example a violin ensemble that you are shrinking down in size and placing a little over to one side. But anything with positioning already in place is being redundantly re-positioned by stage position panning, because the convolution will respond to that positioning by itself, as the room did. By monkeying with the stage position, you are distorting that response with a stereo source.

    All this is further complicated by one other thing that bothers me: the fact that Altiverb does something rather odd from the standpoint of orchestral performance. Almost all of the impulses are recorded with multiple MICROPHONE positions but only one IMPULSE position. This is exactly backwards from what is needed in convolution recordings for orchestral use, because what you really want is one microphone position (the listener) and multiple impulse positions (the instrument or ensemble). It was mentioned earlier in this thread that things started to sound phasey when mixing different impulses. I have not noticed that, but it makes sense that it would happen, because it is as if you are hearing the same concert hall several times when you mix impulses, instead of one concert hall with different impulses in it. Or it is almost as if your head were in multiple places as you sat in the hall! The only exception to this weirdness is the Todd AO set, which has single microphone placements with multiple impulse locations. That is perfect for orchestral use, but unfortunately it is not the most beautiful reverb space.
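    The "entire sound through the convolution" idea can be sketched directly -- a toy discrete convolution in plain Python, with invented two-sample signals purely for illustration:

```python
# Direct-form discrete convolution: out[n] = sum_k sig[k] * ir[n - k].
# Each channel of an already-panned stereo source is convolved whole
# with the corresponding channel of the room's impulse response.

def convolve(sig, ir):
    out = [0.0] * (len(sig) + len(ir) - 1)
    for n, s in enumerate(sig):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

# Placement already embedded in the channel balance (louder on the left):
left_dry, right_dry = [1.0, 0.5], [0.25, 0.125]
ir_left = ir_right = [1.0, 0.5]   # toy two-tap "room"

wet_left  = convolve(left_dry, ir_left)    # -> [1.0, 1.0, 0.25]
wet_right = convolve(right_dry, ir_right)  # -> [0.25, 0.25, 0.0625]
```

    The left/right balance of the dry source passes through unchanged: the room response is layered onto the placement that is already there, which is the sense in which re-panning the source again afterwards distorts that relationship.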


  • Of course, that is only what I am thinking at the moment, based on a recent 15-instance mix. Tomorrow I will probably do something completely different. Altiverb is complex enough to be used in many different ways.


  • Hi,

    thanks for the detailed statement!

    @William said:

    The only exception to this weirdness is the Todd AO set, which has single microphone placements with multiple impulse locations. 

    By the above, do you mean the "narrow mics"? Because I don't see any single microphone placements...

    thanks


    No, I meant how the narrow mics and the wide mics each have several impulses in which the distance between the impulse and the mic is varied by changing the source position in the room, instead of the mic position. That is more like what is needed for positioning the sound three-dimensionally in orchestral setups. It is very artificial to use more than one impulse with the other approach. I don't understand why Altiverb has only one impulse set that does it this way.


  • OK, I see...

    (microphones are easier to move around than heavy speakers, I guess)[;)]

    very strange indeed!


    Well, I don't know why it was done this way, but I suspect it is a holdover from a past usage of convolution, in which the main point was to have one reverb that you then manipulated in the traditional way with wet/dry mixes. Altiverb developed the stage positioning technique as well, which accomplishes different placements of the source. But the problem with that is its artificial manipulation of the sound, whereas with a simple movement of the sources during the original recording you can do it totally naturally. And I thought that was the whole point of convolution! That is what is puzzling to me.

    Not that I hate Altiverb - it sounds really great, but because of that it makes you want to use it absolutely perfectly.

    Of course the whole philosophy behind MIR is to do convolution in this newer way, capturing the multiple source placement effects and letting them remain pristine and "natural" (if you can really use that word in this subject) as far as possible.