eric-- thanks for the thoughts. I was on a Digital Performer forum just yesterday where there was a discussion about convolution vs. high-end hardware reverbs. Someone made a point that seems to support your thoughts from a whole different angle: with the understanding that the user brings many variables to the table (taste, ability, experience, or lack thereof), software reverbs tend to just "layer" ambience on top of a sound rather than truly interact with it. It's that interaction which the brain and ears recognize, or fail to recognize, as the case may be. Granted, the discussion is being taken out of one context and inserted into another, so there may be all sorts of arguments with this that I'd like to avoid.
The point is that you've made some extremely keen observations, the vestiges of which I'm hearing repeatedly in other conversations about reverb behavior in general. I know that some of this discussion is probably better suited for the "post production" forum, but this psycho-acoustical issue has never been more important than it is now.
The brain indeed "knows" the difference between a bathroom, kitchen, living room, or even an outdoor environment. Sometimes we make the error of listening with our brains and not our ears, in the same way we mix with our eyes and not our ears. There is a fatigue factor to boot-- deadlines mean long hours, and what we "think" we hear at the end of a harrowing day is sometimes not accurate, even if it's the best we can do under the circumstances.
I love mixing and I hate mixing. For as complex as VSL has become to set up for sequencing, actual note entry (which was once the most time consuming part of the process) has become the easiest and fastest thing to do. For every hour of programming and sequencing I seem to spend at least half again as much time with the mix.
Much of this time is spent trying to work out, from acoustical experience, why and how samples and *some* convolution reverbs behave differently than sampled sounds with outboard reverbs, acoustic sounds in reverberant spaces, synthetic sounds with reverbs of all types, etc., etc. Add to this the fact that MP3 compression can squeeze the "air" out of a good recording, and the need to ('over-'?) compensate for this occasional anomaly only complicates the process (for me, anyway).
I put this epistle here because I have just heard a third mix of DG's "Faltering" string demo with a different reverb setting which splits the difference between the first two versions. I like it because most of my work tends to be in the concert hall and that "in your face" orchestral sound tends to be more typical of certain film soundtracks-- or experienced only by the players and not the audience. I suppose it depends again on taste, context, and need which approach to take with environmental ambience.
Well, all this verbiage probably has a ways to go before it draws any definitive conclusions, but somehow I feel a step closer to defining something which has been much more visceral and elusive: how to remove or avoid elements of a mix that distract the listener from the music itself.
Thanks for making me think-- and listen!