I agree. There's no other way to "localize" the events occurring across a 180-degree panorama-- to keep them from interacting in different ways within the LR image, hence bouncing around the stereo spectrum unnaturally-- than to narrow the stereo width.
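For anyone curious, the width-narrowing I mean is just mid/side scaling: decode the stereo pair into a center (mid) and difference (side) signal, attenuate the side, and re-encode. A minimal sketch in Python (the function name and sample values are illustrative, not from any particular tool):

```python
def narrow_stereo_width(left, right, width=0.5):
    """Narrow the stereo image via mid/side scaling.

    width=1.0 leaves the image unchanged; width=0.0
    collapses it to mono.
    """
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0            # center (mono) component
        side = (l - r) / 2.0 * width   # scaled stereo difference
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

# A hard-panned pair collapses toward the center as width shrinks:
l, r = narrow_stereo_width([1.0, 0.0], [0.0, 1.0], width=0.0)
# both channels become [0.5, 0.5] -- fully mono
```

Most stereo-imaging plug-ins do essentially this (often with filtering per band), which is why narrowing pulls wandering events back toward a stable position in the LR image.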
The pre-delay question may never be fully solved under certain circumstances with these libraries. There are challenges with both VI and EWQL for the reasons William cited-- almost opposite reasons, ironically.
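As a point of reference for the pre-delay question: the target value is physical, the extra travel time of the first reflection relative to the direct sound. A rough sketch of that arithmetic (the path lengths are made-up examples, not measurements from either library):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate, in air at ~20 C

def predelay_ms(reflection_path_m, direct_path_m):
    """Pre-delay in milliseconds: the extra travel time of the
    first reflection relative to the direct sound."""
    extra_path = reflection_path_m - direct_path_m
    return extra_path / SPEED_OF_SOUND_M_PER_S * 1000.0

# e.g. a first reflection travelling 17 m farther than the
# direct path arrives roughly 50 ms later
delay = predelay_ms(reflection_path_m=27.0, direct_path_m=10.0)
```

The trouble, as William noted, is that each library bakes in a different (or no) room signature, so the "correct" reflection path to emulate differs per source-- which is why no single pre-delay setting reconciles them.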
Add to this-- within one's effort to achieve the most natural-sounding environmental blend, there is a certain unnatural quality we must learn to live with to a larger degree than we may suspect: the notion that electronic reproduction of live instruments has an inherent unnaturalness. Flute solos soar over an entire orchestra on a recording, whereas they are much more distant in a live context. What's natural for live listening may be quite unnatural for electronic registration-- and vice-versa. There are sonic elements we've come to accept in recorded media, for good or ill.
Further, regardless of live or studio listening contexts, mics are rarely placed with much regard for the most natural human listening experience. Even when a live concert is recorded, the mics are often placed above, at, or below the instruments, at distances that *rarely* match those of human ears. Live listening can serve as a benchmark, and a surround recording may come closer to replicating that experience, but as long as a gaggle of mics is folded down into a stereo mix from a mixing booth, a certain "naturalness" is lost and must be redefined. The listening position for film scores *becomes* the booth itself.
Where VSL leaves so much to the user in terms of shaping the environment in post-- reconciling the silent-stage ambience to virtual ambience-- EWQL indeed addresses the issue with its multiple mic positions. At its best, there are fewer hassles with phase and environmental matching. At its worst, what is perceived as "consistency" for the sake of "ease and speed of use" quickly crosses the line into "sameness," where sameness from one mix (large orchestra) to the next (chamber ensemble) is not desired.
Moreover, regardless of the speaker array (stereo or surround), the ears remain in stereo! It is most fascinating to read the different methods users are finding to deal with this issue. Ambience programming has once again claimed greater importance during a time it has perhaps been taken for granted as a fairly mindless back-end addition. Ambience programming for mixing virtual orchestras in general is largely an undocumented science, one that is being hashed out in forums such as this one.
I'm not certain MIR is the magic bullet, for as wonderful as it looks-- because so much hinges on the user's understanding of acoustical behavior. The same understanding of how ambience is to be applied with a mega TC or Lexi-verb box still applies. I'm not sure if I can afford MIR, or if I can afford *not* to have MIR (which may be a moot point, since its operation on the Mac platform has not been entirely resolved at this time).
In any case, ambience programming has taken on a new level of importance and I am eager to follow users' solutions as I formulate my own theories and solutions in this area.
Thanks for keeping the discussion alive.