Vienna Symphonic Library Forum

  • It is a very good subject, and jbacal and others have the beginning of the answer, perhaps: much wider use of dynamic, timbral, and pitch expressiveness. One difficulty I have been having is trying to 'espressivize' a line as a solo, when any live player would be responding to the whole mass of music into which the line might fit. I still don't think that simply EQ'ing the whole of a string line makes sense, though modifying individual recorded notes as part of making them expressive might work (but that gets us perilously close to the same total investment of time as the fifteen years of violin study).

  • Easy, easy, dear forum members - no hard feelings in this thread. [:)]

    *****

    William, the quest for "the" string sound is probably as old as orchestral recording itself. Leaving the more obvious main factors like the composition, the arrangement, the performance and the acoustics of the surrounding room aside for a moment, I dare to suggest that even in seemingly puristic productions the techniques of sculpting this sound were always used to their fullest extent; the style of doing so was dependent on the respective era, of course.

    Given the "raw" nature of our samples (in the best sense of the word), many kinds of audio processing make sense. These range from simple, static EQing to sophisticated DSP like convolution - the latter not only for reverberation, but also to mimic a certain signal path, or even "dry" resonances.

    Personally, I tend to use a combination of several tools. Most of the time I start with some basic EQing, emphasizing the "body" of each section, while getting rid of the (mostly scale-dependent) resonances that colour the sound too much. The next step is to tame the "shrillness" (is this a word?) between 2 and 4 kHz. This is _very_ much dependent on the arrangement: Sometimes it is necessary to cut two or even three narrow frequency bands here by 8 or even 12 dB; on other occasions, this would make the overall impression too dark and indirect, and a broad dip of 3 dB centered around 3.5 kHz is more than enough. More often than not, I add some kind of harmonic distortion, be it from analogue equipment (older tube gear comes to mind) or by way of the saturation coming from tape emulations or the like. This blurs the pureness of the harmonics in a good way and adds interest and variation to the sound.
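    For those who like to experiment outside their DAW, here is a minimal Python/NumPy sketch of two of those moves - a broad 3 dB peaking cut around 3.5 kHz (standard RBJ "cookbook" coefficients) followed by gentle tanh saturation. The input signal, the drive and all the numbers are placeholders, not a recipe:

    ```python
    # A minimal sketch (not the actual studio tools) of two moves described
    # above: a broad -3 dB peaking cut around 3.5 kHz, then gentle
    # tape-style saturation. Peaking filter per the RBJ "Audio EQ Cookbook".
    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(x, fs, f0, gain_db, q):
        """Apply an RBJ peaking EQ to a mono float signal x."""
        A = 10 ** (gain_db / 40)             # amplitude from dB
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return lfilter(b / a[0], a / a[0], x)

    fs = 44100
    x = np.random.randn(fs)                  # placeholder for a string track

    y = peaking_eq(x, fs, f0=3500, gain_db=-3.0, q=0.7)  # broad "shrillness" dip
    drive = 1.5                              # illustrative saturation amount
    y = np.tanh(drive * y) / np.tanh(drive)  # soft clipping adds harmonics
    ```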

    In pieces with extreme dynamic changes from pp to ff it is sometimes impossible to find one setting for all occasions. In these cases, I like to use dynamic EQs (or multiband compressors) that reduce the volume of a certain frequency range only when a certain threshold is reached. This may sound tricky, but the results are self-explanatory once you try it.
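    In code terms the principle looks roughly like this - split off the offending band, follow its envelope, and attenuate it only while it exceeds a threshold. This is a toy model under my own assumptions, not any specific plug-in's algorithm:

    ```python
    # Toy dynamic EQ: cut a band (e.g. 2-4 kHz) only above a threshold.
    import numpy as np
    from scipy.signal import butter, lfilter

    def dynamic_band_cut(x, fs, lo, hi, thresh, max_cut_db, env_ms=20.0):
        b, a = butter(2, [lo, hi], btype="bandpass", fs=fs)
        band = lfilter(b, a, x)              # isolate the offending band
        rest = x - band                      # crude complement (ignores phase)
        # one-pole envelope follower on the band's magnitude
        alpha = np.exp(-1.0 / (fs * env_ms / 1000.0))
        env = np.zeros_like(band)
        e = 0.0
        for i, s in enumerate(np.abs(band)):
            e = alpha * e + (1 - alpha) * s
            env[i] = e
        over = np.maximum(env / thresh, 1.0)               # 1.0 below threshold
        gain = over ** -0.5                                # roughly a 2:1 ratio
        gain = np.maximum(gain, 10 ** (-max_cut_db / 20))  # limit the cut
        return rest + band * gain

    fs = 44100
    x = np.random.randn(2 * fs)              # placeholder signal
    y = dynamic_band_cut(x, fs, 2000, 4000, thresh=0.1, max_cut_db=8.0)
    ```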

    While it seems contradictory to what I just described, I sometimes _add_ some very high treble to violin sections, for example a soft shelving EQ at 12 kHz or higher. Here, the quality of the EQ is crucial, of course - this is true for virtual orchestration as well as for live recordings.
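    The same cookbook provides a high-shelf filter; here is a sketch of that gentle "air" boost, again with purely illustrative values:

    ```python
    # RBJ high-shelf boost above ~12 kHz (shelf slope S = 1); a sketch of
    # the "air" EQ, with illustrative values.
    import numpy as np
    from scipy.signal import lfilter

    def high_shelf(x, fs, f0=12000.0, gain_db=2.0):
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        cosw, sinw = np.cos(w0), np.sin(w0)
        alpha = sinw / np.sqrt(2)            # shelf slope S = 1
        sqA = np.sqrt(A)
        b = np.array([A * ((A + 1) + (A - 1) * cosw + 2 * sqA * alpha),
                      -2 * A * ((A - 1) + (A + 1) * cosw),
                      A * ((A + 1) + (A - 1) * cosw - 2 * sqA * alpha)])
        a = np.array([(A + 1) - (A - 1) * cosw + 2 * sqA * alpha,
                      2 * ((A - 1) - (A + 1) * cosw),
                      (A + 1) - (A - 1) * cosw - 2 * sqA * alpha])
        return lfilter(b / a[0], a / a[0], x)
    ```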

    The final step is some kind of reverberation, most of the time. Your options range from a very clear, unobtrusive synthetic reverb, just to add that certain feeling of "air", to samples of real halls - rich and full of character. Again, you should treat the reverb as a signal of its own (just as an engineer would optimize the ambience tracks of his recordings). Use EQs and all the sonic tools you feel are necessary (but not more [;)] ...). The crucial point is the balance between dry and wet signal. You may understand that there are hardly any guidelines for this, as it depends totally on the context.
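    Stripped to its bones, convolution reverb with a dry/wet balance is just the following. The impulse response is faked with decaying noise here; in practice it would be a sampled hall, and the mix value is a pure placeholder:

    ```python
    # Bare-bones convolution reverb with a dry/wet balance.
    import numpy as np
    from scipy.signal import fftconvolve

    fs = 44100
    dry = np.random.randn(2 * fs)                  # placeholder source
    t = np.arange(int(2.5 * fs)) / fs
    ir = np.random.randn(t.size) * np.exp(-3 * t)  # fake 2.5 s hall tail

    wet = fftconvolve(dry, ir)[: dry.size]
    wet *= np.max(np.abs(dry)) / (np.max(np.abs(wet)) + 1e-12)  # match levels

    mix = 0.35                # the crucial dry/wet balance; context-dependent
    out = (1 - mix) * dry + mix * wet
    ```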

    *****

    I hope this gives you some starting points for your own solutions. - As this thread has taken a more technical point of view now, I will move it to the "Mixing and PostPro" section of our forum, where other users will most likely look for this topic first.

    All the best,

    /Dietz - Vienna Symphonic Library
  • I think Gugliel's idea of modifying individual notes is theoretically the answer, but of course we would all go mad if we had to do that. I often find it too laborious even to insert the right crescendo articulations, etc.

    I have always noticed that on my recordings the VSL samples sound thinner compared to the live sections - for example, my average (live) violin sections are only 5 + 5 and still sound much bigger than VSL's 14 + 14. Or, to give another example, I recently recorded a French horn section with just 3 horns and they sounded huge - compare that with chords played with the VSL Horns Ensemble (4+4+4 = theoretically 12 horns!), which sound quite small.

    Apart from the 3.5 kHz dip and 12 kHz "air" boost that Dietz mentions too, the live recordings don't need any post-production.

    I think ultimately we could get closer to expressive lines by having more real-time control over vibrato and dynamics (real, not crossfaded), but I wouldn't know how this could be done with samples.

  • Thanks, Dietz, for those ideas. The "raw" aspect of the VSL sounds is what makes them so adaptable and usable.

    To me the most elusive quality of real strings is how they sound far darker than samples much of the time, yet still not muffled in the way heavy-handed EQ (the kind I usually do) makes things sound.

    Maybe I can apply some of this expertise to a Russian Romantic I happen to have sitting right next to me now.

  • I keep meaning to get time in the studio to experiment further with an idea I started to hatch ages ago.

    Space Designer, or any reverb plug-in for that matter, offers a possibility that isn't immediately obvious. When you apply the reverb via a regular post-fade aux send on your mixer, real or virtual, there is a large amount of dry signal left over: thick, up front, in your face, overly bright, too detailed to sit in a mix, etc.

    My idea, which sounded great in initial try-outs, is to send the instruments to the reverb using pre-fade aux sends and then lower the channel faders for the instruments to much lower settings. Because the aux send is pre-fade, the level of the instrument going to the reverb is independent of the fader position. By juggling the aux level against the dry channel fader level, you can really "sit" instruments into the mix. I have the luxury of 8 digital outs from Logic Audio and so pair off sections - strings run through 1 & 2, woodwind through 3 & 4, brass and percussion through 5/6 and 7/8 respectively.
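    A toy calculation shows why this works: with a post-fade send the reverb feed follows the channel fader, while a pre-fade send stays constant no matter how far you pull the dry fader down. The gain values are hypothetical:

    ```python
    # Pre-fade vs. post-fade aux sends, reduced to the gain arithmetic.
    def db_to_gain(db):
        return 10 ** (db / 20)

    fader_db, send_db = -12.0, 0.0   # dry fader pulled down, send wide open

    post_fade = db_to_gain(fader_db) * db_to_gain(send_db)  # drops with fader
    pre_fade = db_to_gain(send_db)                          # fader-independent

    print(f"post-fade feed to reverb: {post_fade:.3f}")     # 0.251
    print(f"pre-fade  feed to reverb: {pre_fade:.3f}")      # 1.000
    ```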

    This system depends absolutely on having a reverb setting that you really like the sound of. There are a couple of rooms/chambers/spaces in Space Designer that I like a lot. It also allows convolution reverbs to be appreciated to the fullest: rather than just adding some kind of tail to a sound, the reverb provides the character itself. Strings sit back and aren't so bright, flute solos come over with a real sense of space and distance. Nothing sounds overly close-mic'd, which is my intention a lot of the time when recreating orchestral sounds. Sometimes a flute is really nice in the same room as the listener, but not when playing the Prélude à l'après-midi d'un faune.

    I recommend others have a play with this. If anyone wants to know how to set up pre-fade sends in Logic Audio, I shall be happy to furnish them with details.

    Great thread, though, and totally central to any possible chance of sampled orchestras sounding convincing.

  • Very true, rawmusic. Working with sampled reverb, I have much more "wet" signal in my mixes than with synthetic reverb, without getting that dreaded "muddy" result.

    /Dietz - Vienna Symphonic Library
  • Well, in the thin, rarefied atmosphere of this VERY quiet corner of the forum, please let us continue this interesting discussion.

    At first I thought it was a sign of hard-headed practicality, as well as good ears, that Dietz uses convolution with dry signal, but it is startling that it might even be preferable, because of the cut-off signal mentioned.

    I was using the Cholakis convolutions in Vegas 6 audio with dry signal, and it seemed to work well, though I then began using GigaPulse 100% wet, adjusting the so-called "perspective", because this is "pure".

    However, I really LIKE adjusting dry/wet ratios! It is a perfect way of envisioning how close the source of the sound is, compared to the ambience. So it has been like some kind of punishment not to do that, because it is basically WRONG with convolution. I'm very glad to hear that Dietz ("Da Man") has not fully submitted to this punishment.

  • @Dietz said:

    Very true, rawmusic. Working with sampled reverb, I have much more "wet"-signal in my mixes than with sythetic reverb, without getting that dreaded "muddy" result.


    Do you think this is because of the modulation that is common in most synthetic reverbs?

  • [Aarrggh ... it hurts to see my own typos quoted! [8-)] ... of course I meant "synthetic" reverb. ]

    That's a good question, JJP. I have no final answer to that, but modulation may be one part of the equation. The subtle nuances that distinguish a purely synthetic instrument from a sampled one may be another factor - one that could be just as valid for sampled reverb.

    /Dietz - Vienna Symphonic Library
  • @Dietz said:

    [Aarrggh ... it hurts to see my own typos quoted! [8-)] ... of course I meant "synthetic" reverb. ]


    I did not mean to amplify your typo. [:D] Truth be told, I read and posted quickly and did not even notice it until you pointed it out. Oops! [:O]

  • Dietz,

    Do you think that dynamic range is a problem here? That is something I have been thinking about, because one thing I notice is that when listened to separately, just triggered in isolation, any of the string sounds are full and big. But when placed in a mix with other instruments, they often seem to shrink to a thin little line. These are specific pieces I have heard, though, and I wonder if a distortion of what actually happens in live recording situations is taking place.

    If, in a live recording, the microphones capture the full size of a string section, that size is never completely erased by the sheer loudness of other instruments such as brass in the mix - either because the physical placement maintains the size, or because the engineer knows not to let it be covered up. But with samples it is constantly erased or obliterated, because the sample user does not respect that prominence due to size. He simply covers up the size of the sound and mixes an entire violin section as if it were one little voice among many, when in reality it is a whole sea of voices.

    Does this make any sense? I am trying to understand myself...

  • William,

    Pardon my boldness, but I couldn't help answering for Dietz by relating a personal experience of this very thing.

    For one of my projects a client wanted the strings on a quiet track to sound lighter and thinner without changing the actual notes or the arrangement. I substituted the Chamber Strings for the 1st Ed. strings, re-mixed, and they STILL sounded too broad and "milky."

    My conclusion was that I had to rewrite completely. As it turned out, the arrangement exposed the string lines too much:

    The woodwinds were providing ostinato parts in the form of repeated staccato figures, along with a separate harp ostinato supported by bass pizzicato doubled by soft timpani. The entire track would have sounded too soft if I had just pulled down the upper strings, which were providing the main texture of slow perf_leg counterpoint.

    Hope this helps.

    Clark