Apologies for what might be a stupid question, but when using stage positioning in AV - do you still power pan in VE?
Rob
Of late, I've only been trimming stereo widths in VE, to avoid having to run separate instances of a stereo-width trimmer on individual channels, but I run AV instances in my DAW's mixer channels, largely because I'd already established that workflow before VE was released. Another thing is that I'm on a Mac, and I've discovered VE can get overwhelmed quite easily, even crashing with fx and VSL instances running concurrently. I don't know why, but because I've been able to work with it at all I haven't bothered to press the VSL team about it. The other reason I hesitate to complain is that I don't hear other users talking about it that much. Of course, I'd love to put all aspects of VE to work, but I don't want to push it.
Thanks, jwl, for the reply. My workflow is similar. I run four slaves, and when I'm done with a cue I bounce (via lightpipe) to the DAW, where I have AV.
Right now I am 'power panning' in VE on all four slaves; then, once each instrument (or group) is in the DAW, I follow Maarten's tutorial. As I see it, finding a balance for the reverb 'tail' on the final master bus is key (careful not to muddy up everything).
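[For readers unfamiliar with the term: 'power panning' refers to an equal-power pan law, where left/right gains follow cos/sin so total power stays constant as a source moves across the stage. A minimal generic sketch in Python - this is the textbook pan law, not VE's actual implementation:]

```python
import numpy as np

def power_pan(mono, position):
    """Equal-power pan of a mono signal.

    position: 0.0 = hard left, 1.0 = hard right. Gains follow
    cos/sin so left**2 + right**2 is constant, keeping perceived
    loudness steady wherever the source sits on the stage.
    """
    theta = position * np.pi / 2
    return np.cos(theta) * mono, np.sin(theta) * mono

# At center (0.5), each channel sits at ~0.707 (-3 dB) rather than
# 0.5 (-6 dB), which is what distinguishes an equal-power pan from
# a simple linear pan.
mono = np.ones(4)
left, right = power_pan(mono, 0.5)
```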
Lots to learn on AV but I am liking the first results.
Rob
(Christian M. - you have had AV for a year now - any words of wisdom you can share?[:)])
Right now I am 'power panning' in VE on all four slaves then when each instrument (or group) is in the DAW follow Maarten's tutorial.
You are right that the two would be duplicating the positioning, and not in the same way or amount. You can probably get away with it, though as far as I can determine it is theoretically incorrect.
I have come to the conclusion that it is absolutely wrong to use any dry signal. I know that may sound crazy, but you are abusing the whole theory behind Altiverb if you do, because you are no longer placing your source within the impulse, just the tail. That is stupid (though of course doing something stupid sometimes works). The entire sound should be processed by the convolution, or you are crudely abusing the software as if it were some new-fangled version of a hardware reverb unit.
This includes the positioning. The idea behind stage positioning is to let you avoid using dry signal for positioning. The sound is still convolved, but it sits where the icons are located and has the frequency coloration of the impulse.
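[The "fully wet" idea above can be sketched in a few lines of Python/NumPy. The impulse response here is a toy decaying array, not a real room capture, and `place_in_room` is an illustrative helper, not anything from Altiverb: the point is simply that the whole source goes through the convolution, so direct sound, early reflections, and tail all come out of the impulse, with no separate dry path mixed back in.]

```python
import numpy as np

def place_in_room(source, ir):
    """100% wet placement: the entire source is convolved with the
    room's impulse response; nothing dry is mixed back in afterwards."""
    return np.convolve(source, ir)

# A toy decaying impulse response (stand-in for a real room capture).
ir = np.array([1.0, 0.5, 0.25, 0.125])

# Feeding a unit impulse through reproduces the IR itself: direct
# sound, reflections, and tail all come from the convolution.
wet = place_in_room(np.array([1.0]), ir)
```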
This is complicated by the fact that if your original sound is stereo, it already has placement embedded within it. So in that case - a stereo source - you MUST NOT use stage positioning changes, but instead use the default location, or perhaps spread it further apart - but NOT shifted from one side to the other, because that is an extremely complex distortion of the audio. The most direct use of the convolution principle, in other words the most accurate, is putting the already-panned dry source through the convolution processing.

If you have a mono source - which apparently almost no one uses - then you can scoot it all over the stage with no problems. Likewise, if you have an unpanned stereo ensemble, you could move it on the stage as well - for example, a violin ensemble that you are shrinking down in size and placing a little to one side. But anything with positioning already in place is being redundantly re-positioned by stage-position panning, because the convolution will respond to that positioning by itself, as the room did. By monkeying with the stage position, you are distorting that response with a stereo source.
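[To illustrate why the mono source is the clean case: the position can live entirely in a pair of impulse responses captured with the source standing at the desired spot, so nothing needs to be pre-panned. A toy sketch, assuming made-up IR values rather than real captures:]

```python
import numpy as np

def convolve_stereo(mono, ir_left, ir_right):
    """Place a mono source by convolving it with a left/right IR pair
    captured with the source at the desired stage position; the
    positioning comes entirely from the IRs, not from panning."""
    return np.convolve(mono, ir_left), np.convolve(mono, ir_right)

# Toy IRs for a source left of center: the left channel gets a louder,
# earlier direct sound; the right channel arrives later and weaker.
ir_left = np.array([0.9, 0.4, 0.2])
ir_right = np.array([0.0, 0.5, 0.3])

mono = np.array([1.0, 0.0])
left, right = convolve_stereo(mono, ir_left, ir_right)
```

The output is naturally louder and earlier on the left, even though the mono input was never panned at all.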
All this is further complicated by one other thing that bothers me: the fact that Altiverb does something rather odd from the standpoint of orchestral performance. Almost all of the impulses are recorded with multiple MICROPHONE positions but only one IMPULSE position. This is exactly backwards from what is needed in convolution recordings for orchestral use, because what you really want is one microphone position (the listener) and multiple impulse positions (the instrument or ensemble). It was mentioned earlier in this thread that things started to sound phasey when mixing different impulses. I have not noticed that, but it makes sense that it would happen, because mixing impulses is as if you are hearing the same concert hall several times, instead of one concert hall with different sources in it. Or it is almost as if your head were in multiple places as you sat in the hall! The only exception to this weirdness is the Todd AO set, which has single microphone placements with multiple impulse locations. That is perfect for orchestral use, but unfortunately it is not the most beautiful reverb space.
Hi,
thanks for the detailed statement!
@William said:
The only exception to this weirdness is the Todd AO set, which has single microphone placements with multiple impulse locations.
By the above, do you mean the "narrow mics"? Because I don't see any single microphone placements...
thanks
No, I meant how the narrow mics and the wide mics each have several impulses in which the distance between the impulse and the mic is varied by changing the source position in the room, instead of the mic position. That is more like what is needed for positioning the sound three-dimensionally in orchestral setups. It is very artificial to use more than one impulse with the other approach. I don't understand why Altiverb has only one impulse set done this way.
Well, I don't know why it was done this way, but I suspect it is a holdover from a past usage of convolution in which the main point was to have one reverb that you then manipulated in the traditional way with wet/dry mixes. Altiverb also developed the stage positioning technique, which accomplishes different placement of the source. But the problem with that is its artificial manipulation of the sound, whereas with a simple movement of the sources during the original recording you can do it totally naturally. And I thought that was the whole point of convolution! That is what is puzzling to me.
Not that I hate Altiverb. It sounds really great, and because of that it makes you want to use it absolutely perfectly.
Of course the whole philosophy behind MIR is to do convolution in this newer way, capturing the multiple source placement effects and letting them remain pristine and "natural" (if you can really use that word in this subject) as far as possible.