Vienna Symphonic Library Forum

  • Loss of sound when panning

    This is something that could appear in the more engineering-biased forums, but I'd like to draw this subject to Herb's attention, because it concerns a potential loss of quality to the VSL sounds in cases where composers use GS (and I suppose EXS24).

    The thing is this: panning instruments in GS results in a varying loss of either the L or the R half of the original stereo recording, a recording done with such attention to quality at VSL. Panning with the panpots on my Mackie d8b would perhaps be better than panning in GS, but then I tie up the Mackie's channels with fixed pan positions.

    Waves S1 is supposed to be able to restore stereo placement, but that's no use for 'live' realtime MIDI going into the hardware inputs of a console.

    Will MIR help with this in any way?

    Would it be a horrendously complicated and laborious job to batch process some of the key instruments in the orchestra, say the string sections, to their most common stereo placement? Obviously at a price.

  • MIR almost certainly will solve this.

    The intelligent way to pan is to introduce a slight time delay between the two channels, proportional to the delay between two stereo microphones or, more commonly, between human ears. This operation is extremely light on the CPU. Why no one has implemented this, and why volume panning (which sounds artificial) is used instead, is beyond me.
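
    A minimal sketch of that idea, assuming a mono numpy buffer and an inter-aural-style delay of well under a millisecond (this is just an illustration of the technique, not anything GS or VSL actually implements):

    ```python
    import numpy as np

    SAMPLE_RATE = 44100      # Hz, assumed
    MAX_ITD = 0.0006         # ~0.6 ms, roughly the largest inter-aural time delay

    def delay_pan(mono, pan, sr=SAMPLE_RATE):
        """Pan a mono signal by delaying one channel (ITD-style).

        pan: -1.0 = hard left ... +1.0 = hard right.
        The lagging channel is delayed by up to MAX_ITD seconds;
        channel levels are left untouched.
        """
        delay_samples = int(round(abs(pan) * MAX_ITD * sr))
        delayed = np.concatenate([np.zeros(delay_samples), mono])
        padded = np.concatenate([mono, np.zeros(delay_samples)])
        if pan >= 0:                  # image to the right: delay the left channel
            left, right = delayed, padded
        else:                         # image to the left: delay the right channel
            left, right = padded, delayed
        return np.column_stack([left, right])

    # Example: a 1 kHz tone panned halfway to the left
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    stereo = delay_pan(np.sin(2 * np.pi * 1000 * t), pan=-0.5)
    ```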

  • What more can you tell us about MIR?

    Could it be used within GS as a plug-in?

  • All I know about MIR is that it is physical acoustic modelling and will be amazing. I've been screaming for GS to have something like this for two years now. There's no reason a synthesizer must ALWAYS work in real time (there can certainly be a realtime preview) if the spatial qualities of the sound can be perfectly replicated.

    As for how it will be implemented (as part of GS, etc.), I have no idea. I myself hope it will be a stand-alone application, complete with a new version of VSL specially designed for this (which I also hope we can crossgrade to). I would love to throw GS away one day.

  • @Another User said:

    MIR almost certainly will solve this.

    The intelligent way to pan is to introduce a slight time delay between the two channels, proportional to the delay between two stereo microphones or, more commonly, between human ears. This operation is extremely light on the CPU. Why no one has implemented this, and why volume panning (which sounds artificial) is used instead, is beyond me.


    MIR is a totally different subject, but the problem with delay-based panning is that it doesn't collapse well to mono. That's why you don't see it on every mixer in the world.

    And while I don't agree that standard volume panning sounds artificial, the problem with it is that it changes as soon as you move your head a fraction of an inch.
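
    For contrast, here's a sketch of ordinary amplitude ("volume") panning with a constant-power pan law - again purely illustrative, not a claim about how GS or the d8b implement their panpots:

    ```python
    import numpy as np

    def amplitude_pan(mono, pan):
        """Constant-power amplitude panning.

        pan: -1.0 = hard left ... +1.0 = hard right.
        Gains follow cos/sin of a pan angle, so overall power stays
        roughly constant while the level ratio moves the image.
        """
        theta = (pan + 1.0) * np.pi / 4.0     # map [-1, 1] onto [0, pi/2]
        return np.column_stack([np.cos(theta) * mono, np.sin(theta) * mono])
    ```

    Note that summing these two channels back to mono just gives a scaled copy of the original signal, which is why plain amplitude panpots collapse to mono cleanly.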

  • Nick - do you mean that 'correct', stereo-preserving panning has more tolerance of the listener's position?
    Jim

  • MIR must intrinsically use delay-based panning, though, along with a host of other physical modelling, which all combined would probably not cause a problem when collapsing stereo to mono. I mean, it must measure the time it takes sound to travel to the different microphones, wherever you place them.

    But you know, folding a stereo recording to mono simply by averaging the waveforms also sounds bad to me. I always prefer to just take either the L or the R channel when going from stereo to mono.
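
    For what it's worth, the two fold-downs being compared look like this (purely illustrative; the stereo buffer is assumed to be an N x 2 numpy array):

    ```python
    import numpy as np

    def downmix_average(stereo):
        """Fold stereo to mono by averaging L and R - prone to comb
        filtering when the two channels are delayed copies of each other."""
        return stereo.mean(axis=1)

    def downmix_take_one(stereo, channel="L"):
        """Fold stereo to mono by simply keeping one channel."""
        return stereo[:, 0] if channel == "L" else stereo[:, 1]
    ```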

  • jrm1, if you pan a signal hard left and a delayed copy of the signal hard right, delayed by say 0.4 milliseconds, the sound will appear to come from somewhere around 45 degrees to the left. If you take the same two signals (original and delayed) and sum them to mono, you have bad comb-filtering problems. Delay-based panning produces a very stable image, but that's the problem with it.

    I wouldn't say that amplitude-based panning is any less "correct" than delay-based panning, since we use both to localize sounds.
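
    A quick sketch of why that mono sum combs, using the 0.4 ms figure above (my own numbers, just to illustrate the point):

    ```python
    import numpy as np

    DELAY = 0.0004   # 0.4 ms, as in the example above

    def mono_sum_gain(freq, delay=DELAY):
        """Level of (x + x delayed by `delay`) at `freq`, relative to 2x.

        |1 + e^(-j*2*pi*f*delay)| = 2*|cos(pi*f*delay)|: full level where
        the two copies align, deep notches where they cancel.
        """
        return abs(np.cos(np.pi * freq * delay))

    for f in (500, 1250, 2500, 3750, 5000):
        g = mono_sum_gain(f)
        print(f"{f:5d} Hz -> {20 * np.log10(max(g, 1e-12)):7.1f} dB")

    # With a 0.4 ms delay the first notch lands at 1 / (2 * 0.0004) = 1250 Hz,
    # with further notches every 2500 Hz above that (3750 Hz, 6250 Hz, ...).
    ```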

  • Just use the DSP mixer in GigaStudio. You can't need more than 32 channels, can you? Use a combination of standard MIDI (power) panning and the DSP (true stereo) panning.

    This is a response with regard to the quality of the recordings in VSL, not directional/placement concerns.

  • That's interesting, King - I had no idea the DSP mixer did anything useful, so I've been using it minimally, with four of its channels as masters for four stereo submixes (corresponding to the eight outs on my sound card). By "true stereo," I assume you mean that they're stereo balance controls?

  • Yup, stereo balance - separate panning for each left/right channel.

  • King, I'm assuming that the DSP panning controls in GS2.5 are the exact equivalent of the panning in the channel strips on my d8b hardware mixer, to which my soundcard is connected. If that is so, is GS's DSP panning a bit redundant when the same thing can be done on the hardware mixer? What I want is panning flexibility for each of the 64 MIDI channels in GS.

    Take this for example: I might need a drum kit with its cymbals panned left and right, its kick in the centre and so on, going out on outputs 1&2; then this to be shared with another instrument also going out on outputs 1&2 but panned hard left or right. That scenario could be needed often, considering that 64 different instruments have to be shared across only 4 stereo outs. But it's impossible if you have to do the panning in GS's DSP mixer or on the hardware mixer - there's just too much 'grouping' for that kind of flexibility.

    If GS could allow true stereo panning on each MIDI channel or instrument, then I'd be happier. Perhaps GS3 will allow this?

  • Well, the DSP mixer allows for much more grouping and outputting before the mixer stage. If you're stuck with 4 outputs, you can at least get 16 stereo channels of subgrouping BEFORE the output stage to your mixer.

    Yup, there is a bunch of grouping in the DSP mixer for sure, but with regard to VSL you can at least group specific instruments together in the DSP Mixer before the output stage, with specific panning applied to those groups of instruments (take strings, for example: short bows, long bows and effects panned to the specific places you need via grouping to a stereo DSP mixer channel).

    There's no way to get true stereo panning control on all 64 channels of Giga (which you would only need if you were loading a completely different instrument, with a completely different panning placement, on every one of the 64 channels). I find that pretty hard to see happening, especially with a library like VSL.

    You can do the scenario you're talking about. The DSP mixer is a set of 16 stereo mixing channels that can have any MIDI channel routed to them. So you can have the cymbals routed to DSP Mixer 1,2 - the kick and snare routed to DSP Mixer 3,4 - and the third instrument to DSP Mixer 5,6. Then set each of these DSP Mixers to output to output 1,2.

    The DSP Mixer is like having a hardware mixer in line BEFORE your outputs, which makes the outboard mixer redundant, not the DSP Mixer.

  • Thanks, King. I'll delve into GS again.