Vienna Symphonic Library Forum
Forum Statistics

193,997 users have contributed to 42,905 threads and 257,892 posts.

In the past 24 hours, we have 4 new thread(s), 17 new post(s) and 92 new user(s).

  • Angelo, I guess most sample library users use a plugin like Waves S1 (or anything similar) to drastically reduce the stereo width of violins, violas, celli and contrabasses. Wouldn't this reduce your problem to something hardly noticeable?

  • Peter Roos...

    No! No engineer would ever do that when mixing, because there is
    very likely a psychoacoustic algorithm behind such a stereo width
    plugin, which would not only alter the true stereo field but also
    introduce a psychoacoustic effect into it. You could, however, send
    the left and right sides of a "stereo fader" separately to a single
    buss to get around the problem, while muting the "stereo fader" itself.

    That's the right way:
    You assign the L and R of a stereo signal to two single-channel faders.
    Then you can control the level of each side separately, you can
    narrow both sides separately with TWO pan knobs, and you can EQ
    L and R separately. None of that can be done with a so-called
    "stereo fader".

    Therefore, a sample player whose software can't split the
    stereo sample onto two track faders is practically useless, or at
    least reduces your possibilities drastically. Sometimes it would
    be good if programmers worked as engineers for a couple
    of years before they wrote sample player software.

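    For anyone who wants to see what the two-single-fader routing described above amounts to in plain arithmetic, here is a rough sketch in Python. All names are my own illustration, and the equal-power pan law is an assumed convention, not anything from a particular console or sampler:

```python
import math

def equal_power_pan(mono, position):
    """Pan a mono signal into a stereo pair with an equal-power
    (sin/cos) law. position: -1.0 = hard left, +1.0 = hard right."""
    theta = (position + 1.0) * math.pi / 4.0   # maps -1..+1 to 0..pi/2
    gain_l, gain_r = math.cos(theta), math.sin(theta)
    return ([s * gain_l for s in mono], [s * gain_r for s in mono])

def split_stereo_to_two_faders(left, right, level_l, level_r, pan_l, pan_r):
    """Treat each side of a stereo sample as its own mono channel with
    its own fader level and pan knob, then sum both onto a stereo bus.
    This is the 'two single-channel faders' routing: independent level
    and independent pan per side."""
    l_to_l, l_to_r = equal_power_pan([s * level_l for s in left], pan_l)
    r_to_l, r_to_r = equal_power_pan([s * level_r for s in right], pan_r)
    bus_l = [a + b for a, b in zip(l_to_l, r_to_l)]
    bus_r = [a + b for a, b in zip(l_to_r, r_to_r)]
    return bus_l, bus_r
```

    With pan_l = -1.0 and pan_r = +1.0 the signal passes through untouched; moving both pans inward narrows the field, and each side's level and pan stays independently adjustable, which a single stereo fader cannot do.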

  • Hi Angelo,

    Never too old to learn... [[;)]] (this week turning 47).

    Let me rephrase what I said: most semi-pros and wannabes use tools like this... [[;)]] hehe

    I'm aware that a single pan on a stereo channel doesn't work, because you lose important signal that you actually want to reposition, not attenuate.

    Gigastudio and Cubase SX also have tools to reduce width and pan the stereo signal, I believe visualized with L and R bars that you can position in a left-to-right field.

    Do you believe these tools have the same undesirable side-effects?

    Very open to suggestions; thanks for your contributions on VSL!


    @Peter Roos said:

    Hi Angelo,
    Gigastudio and Cubase SX also have tools to reduce width and pan the stereo signal, I believe visualized with a L and R bar that you can position in a left to right field.

    Do you believe these tools have the same undesirable side-effects?


    I hear the StereoSpread plugin from Steinberg doesn't introduce
    a psychoacoustic algorithm into the signal when you narrow the stereo
    field; it sounds the same as if you panned towards the center on two faders.

    But it certainly does when you pull the knob in the wide direction!

    Here again: why can't we pull each channel separately, or rather
    unlock them?

    I have to read the Kontakt manual now for the first time, to see if there
    is a possibility to do it the way I want.

    By the way, this imperfection is only present with VSTi faders, not with
    audio track faders, which can be split to two faders. And you'd better
    decouple, or rather disconnect, the stereo track into separate single tracks.

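    A clean "narrow" of the kind described above, panning L and R toward the center on two faders, is pure gain mixing with no psychoacoustic processing. Here is a minimal sketch; the equal-power law is an assumed convention:

```python
import math

def narrow_by_cross_panning(left, right, width):
    """Narrow a stereo field by panning the L and R channels toward
    the center on two independent pan positions. width: 1.0 leaves
    the signal untouched, 0.0 collapses it to mono."""
    def pan_gains(position):
        # Equal-power law: position -1..+1 -> (left_gain, right_gain).
        theta = (position + 1.0) * math.pi / 4.0
        return math.cos(theta), math.sin(theta)
    l_to_l, l_to_r = pan_gains(-width)   # left channel panned part-way in
    r_to_l, r_to_r = pan_gains(+width)   # right channel panned part-way in
    out_l = [l * l_to_l + r * r_to_l for l, r in zip(left, right)]
    out_r = [l * l_to_r + r * r_to_r for l, r in zip(left, right)]
    return out_l, out_r
```

    At width = 0.0 both sides land at the same pan position and the two output channels become identical, which is exactly the "panned to the center on two faders" mono case.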

  • Waves' S1 is clean as long as you don't touch the "Shuffler" parameters. If you use Cubase SX/Nuendo with dedicated stereo tracks, you have the option of separate L/R panners with the ability to keep the relative stereo base intact while moving the whole signal from left to right, if you hesitate to use a plug-in. No psychoacoustics here, just plain pan laws.

    /Dietz - Vienna Symphonic Library
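
    To illustrate the "just plain pan-laws" point, here is a sketch contrasting a linear pan law with the equal-power (-3 dB center) law. The exact law a given console applies is configurable and varies between products, so treat these as illustrative conventions rather than Cubase's actual values:

```python
import math

def linear_pan(position):
    """Linear (constant-voltage) pan law: the two gains sum to 1.0,
    so the center position sits about 6 dB down per side."""
    g_r = (position + 1.0) / 2.0
    return 1.0 - g_r, g_r

def equal_power_pan(position):
    """Equal-power (sin/cos) pan law: the squared gains sum to 1.0,
    so perceived loudness stays roughly constant while panning.
    Center is about 3 dB down per side."""
    theta = (position + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

# Center position: linear -> (0.5, 0.5), equal-power -> (0.707, 0.707).
for pos in (-1.0, 0.0, 1.0):
    print(pos, linear_pan(pos), equal_power_pan(pos))
```

    Both laws are pure static gains, which is why this kind of panning leaves the stereo field intact rather than introducing any psychoacoustic coloration.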
  • Also, by definition the level is going to be different in each channel of a stereo recording, no?

  • Yes, different!

    But the goal is to have near-equal levels in a natural stereo recording right from the beginning, when you record. If the L/R levels are too far apart, a power balancing can be done.

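    One simple form such a power balancing could take is a static gain derived from each side's RMS level. This is purely illustrative; real tools may average over longer windows or weight the spectrum:

```python
import math

def rms(channel):
    """Root-mean-square level of a channel (average power measure)."""
    return math.sqrt(sum(s * s for s in channel) / len(channel))

def power_balance(left, right):
    """Rebalance L and R so both sides carry equal average power,
    using one static gain per side. A single gain per channel leaves
    the stereo field's internal relationships intact."""
    rms_l, rms_r = rms(left), rms(right)
    target = math.sqrt(rms_l * rms_r)        # meet in the middle
    gain_l, gain_r = target / rms_l, target / rms_r
    return [s * gain_l for s in left], [s * gain_r for s in right]
```

    Splitting the correction between both sides (geometric mean) keeps the overall level roughly where it was instead of pushing one channel up by the full difference.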

  • But a 4dB difference seems completely normal.

    By the way, I like the way Giga 3 deals with levels and panning - concentric width and tilt controls, plus a stereo fader.


    @Nick Batzdorf said:

    But a 4dB difference seems completely normal.


    Over a long-term average, rather not; but as long as you can re-balance the power without affecting the integrity of the stereo field, it's no problem.

    On the other subject, single faders vs. stereo faders: it would really be an advantage to have a stereo sample player on two single faders, to have all possible control over the stereo field.


    Here are a few words on stereo in general from my engineer:

    First, I try to think of the "Stereo Space" as a piece of musical reality. Once we have acquired that concept, we can conversely, also think of the "Stereo Space" as a piece of musical fantasy. Whether or not it could exist in nature, or in a natural acoustical environment, is irrelevant. Most of the "Stereo Spaces" in my recordings, began their life in my imagination...

    I think of my stereo sound-field as a sonic sculpture...

    I always try to make my stereo sound-field far more than merely two-channel mono. In other words, I always try to make my stereo sound-field multi-dimensional, not merely left, center and right. For me to be satisfied with a sound-field, it must have the proportions of left, center, right and depth.

    Since the middle 1960’s I think my philosophical approach to using the "Stereo Space" has been to take the listener into a “New Reality” that did not, or could not, exist in a real life acoustical environment. This “New Reality”, of course, existed only in my own imagination. What I mean is, that before what I call “The Recording Revolution”, our efforts were directed towards presenting our recorded music to the listener in what amounted to an essentially unaltered, acoustical event. A little “Slice of Life”, musically speaking. (This “Recording Revolution” took place from 1950 through 1970.) This was not true just of myself, but was also true of many of the people that were interested in the same things that I was. We all experienced this same “Recording Revolution”. After that change in our basic music recording objective, along came the “New Reality” in using the "Stereo Space".

    ________________________________________________________________

    Understand first of all that much of what I do in the studio is based on stereophonic microphone technique. True stereophonic microphone technique. My favorite stereo mike technique is, of course, the Blumlein Pair.

    I’ve always felt that music mixing is, in reality, an extension of arranging. I think that gut reactions translated to music recordings are the most believable. Therefore, it follows that music mixing has to be entirely instinctive and intuitive. To be working on a piece of music, and then having to stop the creative flow, to think through a technical function, is absolutely impossible for me.

    The comprehensive strength of a powerful automation system enhances my creative energy by providing new working options.

    One of the features I always look for in any desk is if it has the capacity to free my creative process. I'm much more impressed by being able to put myself and my imagination into the music, than I am about any specific technical feature on the desk itself.

    Beyond the technology and the studio environment, is the love of music itself that defines my approach to my projects. I think music is really the only true magic in life. If I can't put my imagination into the music and create a sound field that exists first in my mind, and is not necessarily pre-thought, I will most definitely have a difficult time with a project. If the technology gets in the way of my imagination, I get very quickly bored and I’ll frequently start yawning. For me, as you can see, reacting to the sound of the music is enormously important.

    ________________________________________________________________

    We'll talk first about one of the most important aspects of musical sound. This one component of musical sound will lead us to a basic understanding of what happens when we record music.

    That subject matter is Timbre...

    By definition, Timbre means: "The quality of a sound that distinguishes it from other sounds of the same pitch and volume." In other words, the distinctive tone of a musical instrument or voice.

    To look at the timbre of a musical instrument scientifically we find that the timbre of a musical sound is a result of the complex combination of two entities:

    #1- The fundamental note played by that instrument, or voice, together with its multi-directional overtones, and...

    #2- The pattern of early room reflections resulting from that note and its overtones.

    Think about that for a minute: it is all happening in the first few milliseconds of the life of a sound, before the onset of reverberation, and it is very, very complex, far beyond the reach of any conventional math (although the new science of chaos is revealing there may be ways of dealing with such complexity).

    If the microphones are too close to the sound source the early room reflections will be lost.

    These early room reflections are an often neglected, but very important component of musical sound quality.

    If the mics are placed too far from the sound source, the later-arriving reverberation will mask the actual sound source itself... (this explains why some seats in a concert hall are better than others).

    Here's a good example: if I were looking for a very 'breathy' or 'sensuous' vocal recording 'sonic image', for instance, I would place the singer as close as is physically possible to a single microphone, thereby eliminating almost all early reflections. I would even use no windscreen, if possible.

    In his book "Spatial Hearing" Blauert says our hearing system deals with the complexity of the human hearing process by employing something known as "spectral incoherence".

    If you were to analyze a sound as it entered each ear canal you would find the expected differences in frequency and phase response between the two ears. But this really doesn't tell us all that much.

    When we look at a sound as two separate sonic events (time and frequency), you will see there are similarities as well as differences (i.e. 50% correlation) which allow the two ears to work together to form a spatial impression, not necessarily directional, because we are dealing with multiple events, not just a single sound source.

    This quality, or I really prefer to call it an ability in our hearing, I think plays a major role in what I like to call the "cocktail party effect". This means that a person with two good ears can pick out one conversation from many, while a person with only one working ear cannot. Along that same line of thought, have you ever noticed how, when you monophonically record a person speaking in a reverberant room (like on a small mono cassette recorder), the recording sounds highly reverberant and essentially unintelligible? Yet when you are in that very same room with that person speaking, you can understand every word. This is, of course, our binaural hearing system in action. Our two good ears connected to our brain are able to separate the direct sound from the reflected sound and give us a highly intelligible sound image. A monophonic microphone and tape recorder cannot.

    What I’m really talking about is that when we listen, we hear sounds with two ears, and thus utilize our binaural hearing. The sound waves reaching the two ears will usually not be identical. For low frequency sounds, of long wavelength (compared to the size of the head), there will generally be a phase difference due to the slightly different arrival times of the sounds at the two ears. For high frequency sounds, of short wavelength, there will also be an intensity difference due to the fact that one ear is farther from the source of the sound and is also in the sound 'shadow' of the head. However, despite these differences, we usually hear only one sound. This I call binaural 'fusion'. In the processing of these sounds, however, the brain utilizes these differences to enable it to tell us what direction the sound is coming from. This process is called "Localization". "Localization" is, in reality, the basis for the stereophonic effect in music recording.
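
    As a rough illustration of the low-frequency timing cue described above, here is Woodworth's classic spherical-head approximation of the interaural time difference. The head radius and speed of sound are textbook assumptions, not measurements from the text:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, dry air at about 20 C
HEAD_RADIUS = 0.0875     # m, a common average used in spatial-audio models

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head estimate of the interaural time
    difference: the far ear's extra path length is r*(sin(theta) + theta),
    part straight-line and part wrapped around the head."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS * (math.sin(theta) + theta) / SPEED_OF_SOUND

# A source fully to one side (90 degrees) arrives about 0.66 ms
# earlier at the near ear -- the phase cue the brain uses below
# roughly 1.5 kHz, where the wavelength exceeds the head size.
print(interaural_time_difference(90.0))
```

    The intensity cue (head shadowing at short wavelengths) is frequency-dependent and much harder to capture in a one-line formula, which is part of why localization is such a rich process.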

    The real trick is to figure out how we can use all this esoteric and scientific knowledge and make it help us in recording music...

    Stereo recording, as it exists today, can do a pretty good job of conveying spectral incoherence, or ‘Localization’ to a listener, if the recordist uses good microphone choice and placement. Such microphone choice and placement technique implies numerous conditions: (I'll give you a couple of the conditions that I think are the most important.)

    The predicament with monophonic single-microphone technique or placement (especially when it will be used as a single source point in a stereo mix) is that we could say that since you only have one channel there is no incoherence, so you might as well get close and not risk the "off-mic sound". That statement, of course, is a gross generalization. It also runs the risk of seriously limiting our creativity in certain recording situations.

    There, that's a start........

    ________________________________________________________________


    Question from Angelo:
    So your Blumlein track is on a stereo fader, and if
    you wanna pan, you just turn the pan knob till the
    source comes from where you wanna place it in the
    stereo field? Is there anything we have to consider
    when panning a Blumlein stereo track that could lead
    to negative consequences?

    Do you pan a Blumlein stereo track at all, or is that
    a fixed stereo field you never alter with panning, with
    the horizontal placement done in another way, i.e. by
    the angle at which the singer/instrumentalist stands to
    the Blumlein pair when recorded?

    Answer: Bruce Swedien
    NO!!! NEVER A STEREO FADER!!!!! THERE IS NO SUCH THING AS A STEREO FADER!! So-called stereo faders are always out of balance. Don't use them!!!! If one side of the microphone feeding a stereo fader is 1 dB low, its image will always be off!!!


  • cont.

    I must first explain that I do not do every stereo recording as a Blumlein Pair. In a mix where the main stereo sound field is supplied by either a Blumlein Pair or a Decca Tree three-microphone setup, I will use what are called "point-source" microphones to spot or reinforce a solo or an instrument that I want to feature.

    A thorough understanding of the Decca Tree microphone system is very important...

    The Decca Tree is a technique of recording that grew out of Decca's research and development into stereo which started in 1954.

    Decca Records has a long tradition of developing their own methods and technology, and so they set out to develop their own method of recording stereo as well as developing their own proprietary designs of mixing consoles and other recording equipment.

    The use of the three-microphone technique that has come to be known as the "Decca Tree" grew out of the desire to maximize the clarity and depth of opera and orchestral recordings. The actual "tree" is a triangle of microphones placed roughly ten to twelve feet above stage level, above and just behind the conductor, arranged on a specially designed and constructed microphone stand. The orchestra’s image is adjusted so that the center mike goes equally to both the left and right channels of the stereo buss. The right tree mike goes to the right stereo buss, and the left tree mike to the left stereo buss.

    When this technique was first used in 1954, the microphones used were Neumann KM 56s, tilted 30 degrees toward the orchestra. Other microphones were tried including the cardioid M 49 (in baffles), in 1955. Soloists with the orchestra are usually spot miked.

    The use of the tree has remained virtually unchanged since the '60s, although Decca engineers have made minor modifications to the microphone placement on the tree. In a typical Decca recording session, every effort is taken to find a suitable recording venue with desired reverberation characteristics.

    As for the spacing of the three mikes themselves, this varies with the venue used and with the size of the ensemble. The size of the triangle itself varies with the amount of width and spaciousness desired. Here I am adjusting one of the "sweetener" mikes that I placed in the orchestra to bring out a part.

    Digest this a bit and then we will talk further....

    Bruce Swedien

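    The Decca Tree routing described above (left mike to the left buss, right mike to the right buss, center mike equally to both) can be sketched as a simple fold-down. The -3 dB gain on the center mike is my own assumption to keep the center from building up; it is not a documented Decca figure:

```python
def decca_tree_to_stereo(left_mic, center_mic, right_mic, center_gain=0.7071):
    """Fold the three tree mics down to a stereo bus: left mic feeds L,
    right mic feeds R, and the center mic is fed equally to both sides.
    center_gain = 0.7071 (-3 dB) is an assumed equal-power convention."""
    bus_l = [l + c * center_gain for l, c in zip(left_mic, center_mic)]
    bus_r = [r + c * center_gain for r, c in zip(right_mic, center_mic)]
    return bus_l, bus_r
```

    Because the center mike appears identically in both channels, it anchors the middle of the image while the outer mikes supply the width and spaciousness.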


    @Nick Batzdorf said:

    By the way, I like the way Giga 3 deals with levels and panning - concentric width and tilt controls, plus a stereo fader.


    Yes, a very nice feature which I would love to see in VI. I'm currently trying to figure out how to narrow the stereo field of certain instruments. Using the surround pan is very tricky and time-consuming.


    @Another User said:

    The predicament with monophonic single-microphone technique or placement (especially when it will be used as a single source point in a stereo mix) is that we could say that since you only have one channel there is no incoherence, so you might as well get close and not risk the "off-mic sound". That statement, of course, is a gross generalization. It also runs the risk of seriously limiting our creativity in certain recording situations.


    Since we're off to the races...you know how every proverb has an opposite? "He who hesitates is lost" and "Look before you leap," for example?

    Well, the opposite to the above is "If everything is stereo, nothing is stereo." That applies to pop mixes more than orchestra, but it's true. [:)]

    About Bruce "Expensive Mono" Swedien's comment: he's right, of course, but his only objection to stereo faders is that you don't have independent control over each channel's level. Tilt and width controls solve that problem with a more convenient interface, in my opinion.
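
    One plausible reading of how width and tilt controls recover per-channel control (my own sketch of the idea, not GigaStudio's actual math): width scales the side (difference) component, and tilt trades gain between the two channels:

```python
def width_and_tilt(left, right, width=1.0, tilt=0.0):
    """Hypothetical width/tilt control. width: 0.0 = mono, 1.0 = full
    stereo. tilt: -1.0 = all left, 0.0 = centered, +1.0 = all right.
    Together the two knobs give the same degrees of freedom as two
    independent channel levels, with a more convenient interface."""
    gain_l = 1.0 - max(tilt, 0.0)   # positive tilt attenuates the left
    gain_r = 1.0 + min(tilt, 0.0)   # negative tilt attenuates the right
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid, side = (l + r) / 2.0, (l - r) / 2.0   # mid/side decomposition
        out_l.append((mid + width * side) * gain_l)
        out_r.append((mid - width * side) * gain_r)
    return out_l, out_r
```

    At width = 1.0 and tilt = 0.0 the signal passes through unchanged; narrowing the width never throws away one side's content, it just reduces the difference between the channels.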

  • Nick...

    I have to add that Bruce never mixes on a computer; I always render the tracks for his Pro Tools rig, which is hooked up to a Harrison console, etc.


  • That doesn't surprise me, because I know he's very much into the subtleties of different equipment. I remember from a visit to his studio in the outskirts of Los Angeles in the early '90s that he used different mastering formats for different songs - DAT for one, different reel-to-reels at different speeds for others, etc.

    I also remember him playing a Michael Jackson mix that started with a jet engine at a level that really frightened me! This was to demonstrate his system, which of course was impressive and big. Everyone else there was fine, though - I obviously have a lower pain threshold than most people. [:)]

  • [:)]

    Well, we all work at a calibrated 85 dB SPL peak, and the favorite monitoring level is 90 dB SPL = 0 VU, where judgements are made correctly.

    Shouldn't hurt, at least not in the right environment.


  • Well, I found a little helper where you can turn the constant-power panning off and modify the stereo field as I like:

    http://home.netcom.com/~jhewes/StereoPan.html

    I haven't checked with the oscilloscope whether it works properly, but I will do that on the next occasion.

    Ladies and Gentlemen,
    if this little plugin works in the VSTi effect slot after the sample player, it will revolutionize the way you can manipulate the position of your sounds in the stereo field, right on the VSTi track.



    @Angelo Clematide said:

    Well, I found a little helper where you can turn the constant-power panning off and modify the stereo field as I like:

    http://home.netcom.com/~jhewes/StereoPan.html

    I haven't checked with the oscilloscope whether it works properly, but I will do that on the next occasion.

    Ladies and Gentlemen,
    if this little plugin works in the VSTi effect slot after the sample player, it will revolutionize the way you can manipulate the position of your sounds in the stereo field, right on the VSTi track.


    Angelo, I just started to look for such an application today, so this couldn't have come at a better time. I am using it in Chainer, which is my current VST host. So far it works exactly as the panners do in Nuendo, but I can then link and move around as in GS3, so that now I don't have to have multiple outputs streaming across the network. Hence I can run a full orchestra (or will when it arrives!!!) with a very low latency.

    Thanks again for the link.

    DG

  • DG

    is what you're saying that Nuendo could do that natively on a VSTi track?



    @Angelo Clematide said:

    DG

    is what you're saying that Nuendo could do that natively on a VSTi track?


    Yes, although not with as many controls as "your" app. This is what I've been doing, but with VI not being multitimbral it has meant having far too many audio channels streaming over the network to be able to track and playback at low latency. By using Chainer I only need 1 audio channel for 10 instruments. However, this means that it all comes into Nuendo via one stereo track, so using your little app on each instrument in Chainer means that the audio (not MIDI..!) is already panned before it reaches Nuendo.

    DG