Same general subject, but related to a specific project.
I'm doing a rather complex rendering of a Beethoven quartet. In the 1st violin part, I've used many articulations, and have noticed some profound psychoacoustic effects when the articulations change. In particular, I'm finding that where either release samples or legato transitions occur, there can be quite a strong spatialization effect, whereby the instrument almost seems to change acoustic environments. As I understand it, this is basically because, with totally "dry" recordings, the only natural decays we hear are those occurring within the instrument itself, and these are being (mis)interpreted by our ears as distinct spatial cues (i.e., reverb). But how do we get around this problem? I know I can do it by simply selecting a different articulation, but that doesn't make musical sense in this particular case.
I've also just read the thread on "Understanding reverb", which is great, and I will try some of the techniques suggested, but are there any other approaches I can try in dealing with the issue of spatial "jumps"?
Things I've tried:
1) narrowing the stereo field using Logic's Direction Mixer. (helps -- particularly with the psychoacoustic "panning" effects generated by different pitches, a subject covered before in the forum -- see the mid/side sketch after this list)
2) applying a very small room IR before any "hall". (seems to help... but I could be crazy -- see the early-reflection sketch after this list)
3) changing sample-dynamic levels. (sometimes helps -- occasionally I find that, with normalized samples, even the dynamic can influence the spatial perception (specifically distance). I suspect this is a result of the disparity between the perceived loudness and the loudness cues imparted by the spectral content of the sample -- almost like the distance and depth illusions created by brightness and size/perspective in painting. See the level-vs-brightness sketch after this list.)
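For anyone who wants to experiment outside Logic: as far as I understand it, the Direction Mixer's spread control is essentially mid/side scaling, so here's a minimal offline sketch of the same idea in Python (numpy assumed; the function and parameter names are just mine, not anything from Logic):

```python
import numpy as np

def narrow_stereo(left, right, width=0.5):
    """Narrow a stereo image via mid/side scaling.

    width = 1.0 leaves the image untouched; 0.0 collapses to mono.
    """
    mid = 0.5 * (left + right)      # sum: the centered content
    side = 0.5 * (left - right)     # difference: the spatial content
    side *= width                   # shrink the side signal to narrow the field
    return mid + side, mid - side   # re-encode to left/right

# Example: narrow a (n_samples, 2) float array loaded from a dry violin sample
# stereo = ...  (e.g., via soundfile.read)
# l, r = narrow_stereo(stereo[:, 0], stereo[:, 1], width=0.4)
```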
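And for item 2, a sketch of what the "small room IR before the hall" stage is doing: convolve the dry signal with a short, early-reflections-only IR, so that every articulation picks up the same room fingerprint before the hall tail is added. The tap times and gains below are invented placeholders, not measured data -- a real small-room IR from a library would replace them (scipy assumed):

```python
import numpy as np
from scipy.signal import fftconvolve

def small_room_ir(sr, taps=((0.004, 0.35), (0.009, 0.25), (0.013, 0.18), (0.021, 0.10))):
    """Build a crude early-reflection IR: a direct impulse plus a few sparse taps.

    Tap times (seconds) and gains are placeholders for illustration only.
    """
    length = int(0.03 * sr)   # ~30 ms: early reflections only, no reverb tail
    ir = np.zeros(length)
    ir[0] = 1.0               # direct sound
    for t, g in taps:
        ir[int(t * sr)] += g
    return ir

# dry = mono (or per-channel) dry sample; sr = its sample rate
# glued = fftconvolve(dry, small_room_ir(sr))[: len(dry)]
# 'glued' then feeds the hall reverb send, so every articulation shares
# the same early-reflection fingerprint before the tail is added.
```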
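For item 3, a rough way to quantify that loudness-vs-brightness disparity: compare broadband RMS level against the spectral centroid (a crude brightness proxy) for two normalized renditions of the same note. The helper names are mine, and this is a diagnostic sketch rather than a fix in itself:

```python
import numpy as np

def rms_db(x):
    """Broadband level in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def spectral_centroid_hz(x, sr):
    """Brightness proxy: amplitude-weighted mean frequency of the spectrum."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)

# Compare two normalized renditions of the same note, e.g. a p and an f sample:
# for name, x in (("p sample", x_p), ("f sample", x_f)):
#     print(name, rms_db(x), spectral_centroid_hz(x, sr))
# If the levels match but the centroids differ widely, the ear may still
# read the duller sample as more distant; a small gain or EQ offset can
# compensate for the mismatch.
```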
Since these sorts of spatial 'illusions' are basically a trick of the ears and brain, I'd imagine they can be worked around, but I'm not sure how. Is this part of what the MIR will use metadata to accomplish? If so... well... please build the proprietary VSL super-computer and release it!
Any and all thoughts greatly appreciated.
J.