Vienna Symphonic Library Forum

  • I think Gugliel's idea of modifying individual notes is theoretically the answer, but of course we would all go mad if we had to do that. I often find it too laborious even to insert the right crescendo articulations, etc.

    I have always noticed that on my recordings the VSL samples sound thinner than the live sections. For example, my average (live) violin sections are only 5 + 5 and still sound much bigger than VSL's 14 + 14. Likewise, I recently recorded a French horn section of just 3 horns and they sounded huge; compare that with chords played with the VSL Horns Ensemble (4 + 4 + 4 = theoretically 12 horns!), which sound quite small.

    Apart from the 3.5k dip and the 12k "air" boost that Dietz mentions too, the live recordings don't need any post-production.

    I think ultimately we could get closer to expressive lines by having more real-time control over vibrato and dynamics (real, not crossfaded), but I wouldn't know how this could be done with samples.

  • Thanks, Dietz, for those ideas. The "raw" aspect of the VSL sounds is what makes them so adaptable and usable.

    To me the most elusive quality of strings is how they often sound far darker than samples, yet still not muffled the way heavy-handed EQ (the kind I usually do) makes them.

    Maybe I can apply some of this expertise to a Russian Romantic piece I happen to have sitting right next to me now.

  • I keep meaning to get time in the studio to experiment further with an idea I started hatching ages ago.

    Space Designer, or any reverb plug-in for that matter, offers a functionality (hate that word) that isn't immediately obvious. When you apply reverb via a regular post-fade aux send on your mixer, real or virtual, a large amount of dry signal is left to be thick, up front, in your face, overly bright, too detailed to sit in a mix, and so on.

    My idea, which sounded great in initial try-outs, is to send the instruments to the reverb using pre-fade aux sends and then lower the channel faders for the instruments to much lower settings. Because the aux send is pre-fade, the level of the instrument going to the reverb is independent of the fader position. By juggling the aux level against the dry channel fader level, you can really "sit" instruments into the mix. I have the luxury of 8 digital outs from Logic Audio and so pair off sections: strings run through 1&2, woodwind through 3&4, brass and percussion through 5/6 and 7/8 respectively.
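    The gain arithmetic behind this trick can be sketched with a toy model (scalar linear gains standing in for faders and send knobs; real mixers work in dB on streams of samples, and none of this is Logic-specific):

```python
# Toy model of post-fade vs. pre-fade aux sends.
# "fader" and "send" are linear gain factors between 0.0 and 1.0.

def post_fade_send(signal, fader, send):
    """Post-fade: the reverb feed follows the channel fader."""
    direct = signal * fader
    to_reverb = signal * fader * send
    return direct, to_reverb

def pre_fade_send(signal, fader, send):
    """Pre-fade: the reverb feed is independent of the channel fader."""
    direct = signal * fader
    to_reverb = signal * send
    return direct, to_reverb

# Pull the fader way down while leaving the send up:
print(post_fade_send(1.0, fader=0.2, send=0.8))  # reverb feed shrinks with the fader
print(pre_fade_send(1.0, fader=0.2, send=0.8))   # reverb feed stays at full send level
```

    With the post-fade send, lowering the fader also starves the reverb; with the pre-fade send, the dry level drops but the reverb stays fully fed, which is exactly why the instrument appears to sit further back in the mix.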

    This system depends absolutely on you having a reverb setting that you really like the sound of. There are a couple of rooms/chambers/spaces in Space Designer that I like a lot. It also allows convolution reverbs to be appreciated to the fullest. Rather than just adding some kind of tail to a sound, the reverb supplies the character itself. Strings sit back and aren't so bright, and flute solos come over with a real sense of space and distance. Nothing sounds overly close-mic'd, which is my intention a lot of the time when recreating orchestral sounds. Sometimes a flute is really nice in the same room as the listener, but not when playing Prélude à l'après-midi.

    I recommend others have a play with this. If anyone wants to know how to set up pre-fade sends in Logic Audio, I shall be happy to furnish them with details.

    Great thread, though, and totally central to any possible chance of sampled orchestras sounding convincing.

  • Very true, rawmusic. Working with sampled reverb, I have much more "wet" signal in my mixes than with synthetic reverb, without getting that dreaded "muddy" result.

    /Dietz - Vienna Symphonic Library
  • Well, in the thin, rarefied atmosphere of this VERY quiet corner of the Forum, please let us continue this interesting discussion.

    At first I thought it was a sign of hard-headed practicality as well as good ears that Dietz uses convolution with dry signal, but it is startling that it might even be preferable, given the cut-off signal mentioned.

    I was using the Cholakis convolutions in Vegas 6 audio with dry signal, and it seemed to work well, though I then began using GigaPulse 100% wet, adjusting the so-called "perspective," because this is "pure."

    However, I really LIKE adjusting dry/wet ratios! It is a perfect way of envisioning how close the source of the sound is compared to the ambience. So it has been like some kind of punishment not to do that, because it is basically WRONG with convolution. I'm very glad to hear that Dietz ("Da Man") has not fully submitted to this punishment.
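    That dry/wet-as-distance intuition boils down to a simple crossfade. This is a generic linear mix law, not GigaPulse's actual "perspective" control, which is its own proprietary processing:

```python
def dry_wet_mix(dry, wet, mix):
    """Linear dry/wet crossfade of two signal values.

    mix = 0.0 -> fully dry (the source sounds close),
    mix = 1.0 -> fully wet (the source recedes into the ambience).
    """
    return dry * (1.0 - mix) + wet * mix

# A mostly-dry balance keeps the source up front:
print(dry_wet_mix(1.0, 0.3, 0.25))
```

    Sliding `mix` upward is the one-knob version of the pre-fade juggling described earlier: the dry component shrinks as the reverberant component grows, and the ear reads that ratio as distance.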

  • last edited

    @Dietz said:

    Very true, rawmusic. Working with sampled reverb, I have much more "wet"-signal in my mixes than with sythetic reverb, without getting that dreaded "muddy" result.


    Do you think this is because of the modulation that is common in most synthetic reverbs?

  • [Aarrggh ... it hurts to see my own typos quoted! [8-)] ... of course I meant "synthetic" reverb. ]

    That's a good question, JJP. I have no final answer to that, but modulation may be one part of the equation. The subtle nuances that distinguish a purely synthetic instrument from a sampled one may be another factor, one that could apply to sampled reverb as well.

    /Dietz - Vienna Symphonic Library
  • last edited

    @Dietz said:

    [Aarrggh ... it hurts to see my own typos quoted! [8-)] ... of course I meant "synthetic" reverb. ]


    I did not mean to amplify your typo. [:D] Truth be told, I read and posted quickly and did not even notice it until you pointed it out. Oops!

  • Dietz,

    Do you think that dynamic range is a problem here? It is something I have been thinking about, because one thing I notice is that any of the string sounds, triggered in isolation, are full and big. But when placed in a mix with other instruments, they often seem to shrink to a thin little line.

    These are specific pieces I have heard, though, and I wonder whether a distortion of what actually happens in live recording situations is taking place. In a live recording, the microphones are placed to capture the full size of a string section, and that size is never completely erased by the sheer loudness of other instruments such as brass, either because the physical placement preserves it or because the engineer knows not to let it be covered up. With samples, that prominence is constantly being obliterated, because the sample user does not respect the size of the sound: he simply covers it up, mixing an entire violin section as if it were one little voice among many, when in reality it is a whole sea of voices.

    Does this make any sense? I am trying to understand myself...

  • William,

    Pardon my boldness, but I couldn't help answering for Dietz by relating a personal experience of this very thing.

    For one of my projects a client wanted the strings on a quiet track to sound lighter and thinner without changing the actual notes or the arrangement. I substituted the Chamber Strings for the 1st Ed. strings, re-mixed, and they STILL sounded too broad and "milky."

    My conclusion was to rewrite completely. As it turned out, the arrangement exposed the string lines too much:

    The woodwinds were providing ostinato parts in the form of repeated staccato figures, along with a separate harp ostinato supported by bass pizzicato doubled by soft timpani. The upper strings were providing the main texture of slow perf_leg counterpoint, so the entire track would have sounded too soft if I had just pulled them down.

    Hope this helps.

    Clark