Vienna Symphonic Library Forum

  • Yup. True, true.

    I also agree with William on the subject of creating works specifically *for* the samples. That's a different matter, since it's only about getting the balance you want, regardless of what might happen in performance. However, my work with VSL is about 60-40 in favor of creating pieces that are intended for live performance, in a concert situation. These are generally smaller groups, and generally lack rehearsal time, etc. So, I'd like the benefit of a "true" dynamic palette as a means of reducing the number of steps in realizing a new work.

    It's strange, but I think I probably had an easier time getting dynamic balance *before* I ever used samples for composition. Back then, I just went by my understanding of the general rules of orchestration, and my ability to auralize, with regard to balance, and it generally went quite well. But once you've tasted the apple, it's very hard to go back! I love VSL, but this step would seal the bond!

    I actually wonder if this will be part of the bigger MIR project, and the whole secret progress going on behind the scenes with VSL? The ideal, it seems to me, would be to have an orchestra of virtual instruments, placed in a perfectly modeled virtual space which would, by definition, require a perfect simulation of dynamic relationships as well... maybe? One can always dream! We certainly all know, by now, that Herb is very much committed to "the ideal".

    cheers,

    J.

  • last edited

    @Fred Story said:


    I like William's ideas, which if I read them correctly, essentially say - forget all that. The sampled orchestra is a beast unto itself...capable of creating stuff we probably couldn't with a live orchestra.

    Fred Story


    Hello lads! Mind if I sit down with you on this one? What are you drinking? [:D]

    That's it exactly Fred.

    I actually got told off the other day by Leon. [[:D]] He suggested to me that my template was wrong and that a real orchestra wouldn't sound like that. Hahaa! I wish he would come in on this thread actually, because his ideas on orchestral sound and the examples on his site are really excellent. He's a good lad. DG would also be very useful to this discussion, given his conducting experience - something I personally would never try. Oops - again. [:O]

    But, I don't necessarily subscribe to that way of thinking. I can't understand, given the amount of power now available to us, why we want this so-called real sound with a sample library like VSL. I just don't get it.

    I sympathize with J and others who feel it's a prerequisite to get this real orchestral sound and ambience - if that's what they want. In other words, J wants to hear it as real as possible, and then record it with live players. EVEN if that is accomplished, I guarantee, guarantee - it'll still sound completely different when you get the live players in. Bound to. For all sorts of different reasons. And anyway, where's the excitement in that? I mean, what are you going to do? Play the sampled version to a group of live players first and say 'I want it to sound like that'?

    But I do understand, so this is not in any way meant to be a contradiction or argumentative just for the sake of it. Also, when live players are in session, you could use so many different takes -and then in post production do all sorts of tweaking. Add reverb here, EQ there and on and on.

    We're nearly there now, but there will come a time in a few years or so when most writers just aren't going to bother with real players a lot of the time, especially if there's dialogue or special effects over the top. The main problem at the moment, for me anyway, with samples is generally the solo stringed instruments - but that will probably change in time.

    Say you want to put non-orchestral instruments in an orchestral piece - like a synthesizer? What does Rimsky say about that in his book? [[:D]]

    Much more fun to make up your own rules, I would say.

    PR

  • One more miscellaneous thought: just as, for an instrument, a single pitch correction does little good, a single dynamics measurement would not do much to get you (jbm) to your goal. For instance, consider a flute: in the mid treble-clef area, it can't go much beyond piano, or at most mp, but a very high B or C# can't be played at anything under ff, while a very high Bb or A can be played from mf to almost ff. So you'd need the samples to reflect that (unless you gain experience as an orchestrator with real instruments). And as someone said above, the whole process of recording really does squash the dynamic range anyway.
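The register-dependent constraint described above can be sketched as a small lookup table. This is only a toy illustration in Python: the dynamic ordering is standard, but the registers and feasible ranges are just the flute examples named in the post, not a complete chart.

```python
# Toy illustration: a flute's feasible dynamics depend on register, so a
# single per-instrument dynamic measurement isn't enough. The registers
# and ranges below are only the examples from the post, not a full chart.

DYNAMICS = ["pp", "p", "mp", "mf", "f", "ff"]  # ordered soft -> loud

FLUTE_FEASIBLE = {
    "mid treble clef": ("pp", "mp"),   # can't go much beyond p / mp
    "very high B/C#":  ("ff", "ff"),   # speaks only at ff
    "very high A/Bb":  ("mf", "ff"),   # mf to almost ff
}

def is_feasible(register: str, dynamic: str) -> bool:
    """True if `dynamic` lies within the register's feasible range."""
    lo, hi = FLUTE_FEASIBLE[register]
    return DYNAMICS.index(lo) <= DYNAMICS.index(dynamic) <= DYNAMICS.index(hi)
```

A sample set reflecting real dynamics would, in effect, have to encode a table like this per instrument.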

  • last edited

    @jbm said:

    [...]
    I actually wonder if this will be part of the bigger MIR project, and the whole secret progress going on behind the scenes with VSL? The ideal, it seems to me, would be to have an orchestra of virtual instruments, placed in a perfectly modeled virtual space which would, by definition, require a perfect simulation of dynamic relationships as well... maybe? [...]


    That's no secret. The answer in a nutshell: Yes, what you ask for is exactly what we're after.

    /Dietz - Vienna Symphonic Library

  • Paul and gugliel,

    I completely agree with both your points. Paul: I'm only really looking to get "closer" to the real dynamics... but when you're talking about normalized samples, where a Trombone ff is precisely as loud as a Violin ff, well... Anyway, I hear you.

    gugliel. I realize that this is probably the main reason I'm not finding the table I'm looking for -- the scientific community generally points out pretty quickly the problems you're talking about. However, I would be happy with the outer extremes of pp and ff. The dynamic curve I can deal with... (BTW, this is probably also something VSL has up their sleeves!) As I mentioned to Paul above, I'm not looking for absolute "truth", just a closer estimation.

    Dietz: Release it!!! AAAAck! I can't stand the suspense. I know something big is going on. I keep picking up little hints about something big... maybe that's just me dreaming... but I know these things are all technologically possible. They've just been waiting for someone with the drive (and $$$ support) to go ahead and do it! I'll sell my car, and sublet my apartment, then move into a cardboard box (located close to a public AC outlet) with my computers and a blanket. [...] Well, maybe not... quite. [;)]

    J.

  • last edited

    @jbm said:

    Paul and gugliel,

    I completely agree with both your points. Paul: I'm only really looking to get "closer" to the real dynamics... but when you're talking about normalized samples, where a Trombone ff is precisely as loud as a Violin ff, well... Anyway, I hear you.

    J.


    J, I just had a listen to your Harp piece in the Horizon demo section. Yeah - that's interesting, and a sort of new-to-me type of music. The sound is surely as real as you would want to get it. Obviously, the more instruments you use, the more interesting the mixing will become. As for this business with Trombones and Violins and their respective comparative volumes - most people just put stuff further back or forward in the mix. Altiverb-style reverbs allow different positioning etc. Not perfect, but a goodish technique.
    You can tell that I'm the worst technical case on the forum when it comes to all this kind of thing.

    This business about MIR. Does that mean I will be able to just choose a sample and it will be automatically in the right space and the right kind of comparative volume?

  • last edited

    @Fred Story said:



    We've dug back into our Adler since returning from New York. And instead of automatically loading the keyswitched patches, I find myself looking for VSL instruments with the most dynamic layers.

    Fred Story


    You too!? [:D]

    Another observation of Scott's ties together Fred's and William's comments: he noted the shift among recording engineers who, during the 60s and 70s, wanted to close-mic sections to isolate them for greater mix flexibility. As things worked out in the 80s and after, Scott said the preference was/is to go with overheads and, again, to rely on solid orchestration to achieve the right balance.

  • It is possible to create balances between the solo instruments and groups prior to any performance that are based on relative dynamic ranges. I like the idea of this, since it provides a base to operate from and then deviate if desired.

    For example, the fact pointed out that a normalized ff trombone sounds in a way "thinner" than an mf trombone is an artifact of the equalization of volume levels. Stand three feet away from the player, and it will not be thin at all; it will be deafening. With the normalization you are hearing a far greater proportion of the higher partials, which predominate at the loudest dynamic. So adjusting these is obviously essential, and I don't really mean to say it is all subjective. There is an objective basis to it, which can be applied fairly straightforwardly in overall volume settings that are never changed, as well as by using a standard (for yourself) approach to compression, i.e. on woodwinds, basses, and perhaps violas. If you then perform TO those initial settings, your performance begins to act more like the actual orchestra. In other words, the woodwinds have to be screaming at the top of their registers to compete at all with the ff brass - if you have these initial volume settings correct in MIDI, rather than wrong and then fixed by huge compensations in the mix. And doing this, I don't think, is all that difficult.

    In fact, one way to do it rather straightforwardly, if you want a mechanistic approach, would be to "calibrate" the entire ensemble's ff dynamic so that these basic balances are in place within the sequencer. Because the dynamic ranges of each instrument are not wrong WITHIN themselves - they are wrong in RELATIONSHIP to each other. And adjusting those relationships is best done prior to performance, not afterwards, because one can then use traditional (and natural-sounding) scoring techniques to achieve balance, instead of huge, artificial compensations in mixing. If, after doing this sort of real-time set-up, your woodwinds, for example, were overbalanced on an important line by the brass, you wouldn't "turn the brass down" or "turn the woodwinds up"; you would play softer dynamic samples in the brass, or adjust the voicing so that it is lighter - exactly as a conductor or orchestrator - not a recording engineer - would do. This is obviously easier said than done, and you will probably still have to fix it in the mix, but it would be nice as a basic operating principle.

    Apparently the MIR project is going to have some of these relationships built into it (?) which would be a huge leap forward.

  • As a relative latecomer to this thread, I have to say that I agree with most of what is being said. When I write, I have a very clear idea in my head as to whether or not the mix will be "real". It may well be that the orchestra part of it is traditional, but that there may be an Erhu or Whistle (for example) overdub. Of course in the Concert Hall the solo instrument would be lost if not voiced correctly, but in a recording situation this can be faked. However, I would consider myself to have failed if the volume had to be increased (or decreased) dramatically for any of the orchestral instruments.

    I am very much of the opinion that I like my orchestrations to sound properly voiced in the session, and have, on occasion, "asked" an engineer to remix a track where some of the supporting instruments had been lifted in dynamic. However, with a lot of commercial sessions I have to be very careful about how I orchestrate. For example, if I have a high, loud horn passage, I sometimes double it with the 3rd trumpet at a much lower dynamic, which adds to the brilliance of the overall horn sound, without being heard as a separate entity. However, in a close-mic situation, unfortunately one hears the trumpet from the other side of the room, and the blend is usually not good.

    Of course, sometimes the brass dynamics are overdone in order to give the players the idea of the sort of sound required. In these situations, one has to rely on the experience of the conductor to balance the overall sound. Anyone who has performed any Beethoven symphonies, for example, will surely have come across the situation where the trumpets are marked fff along with the rest of the orchestra, and have obliterated everything! Obviously with modern orchestration this shouldn't happen, but it is always something to be aware of.

    Regarding MIDI balance, I think that where VSL can work extremely well is that just choosing the correct dynamic level patch can often do the MIDI balance for you.

    DG

  • last edited

    @DG said:


    Regarding MIDI balance, I think that where VSL can work extremely well is that just choosing the correct dynamic level patch can often do the MIDI balance for you.

    This would assume that when the instruments were recorded at varying dynamics, the volume levels were left unchanged, and that they were not adjusted after the fact. Was this actually the case?

  • William,

    You've got it. That's exactly what I'm after... I just wanted an empirical guideline to help in setting those levels. So, as was pointed out with the Rimsky quotes early in the thread, I could just go by the general principles of orchestral balance and dynamics, extrapolate a little, and set my 'pre mix' volumes accordingly. That would probably do just fine.

    cheers,

    J.

  • Well actually what I was suggesting was a simpler, more numerical approach to the initial settings, and THEN using orchestration and performance for the balance. In other words, attempting to emulate the simple physical facts of the instruments in your initial MIDI and DSP settings, and never varying from that (or at least not very much) in the subsequent performance. This would be somewhat restrictive from a recording engineer's perspective, but would allow a composer to put into play his orchestration to address problems with balance.

    I think one important aspect of this idea is that it is not done after the fact. In other words, it is a self-imposed limitation you deal with in the performance, analogous to the limitations a conductor deals with at a rehearsal.

    But these initial settings would not really be done with principles or adjustments of orchestration. They would be set according to obvious differences in volume levels and could simply be approximations, i.e., woodwind overall volume settings in MIDI at 90, as opposed to 127 for brass, or whatever level you arrive at (and like personally since it will never really be completely objective) in the ff "calibration."

    I have not tried this but the discussion has made me want to experiment with it.
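    As a rough sketch of this "ff calibration", assuming a MIDI setup where each section sits on its own channel: fix each section's channel volume (CC7) once at the ff balance, then leave the faders alone and do all further balancing with dynamics in the performance. The woodwinds-at-90 versus brass-at-127 figures are the post's own illustrative values; the strings value is an assumed placeholder.

```python
# Sketch of a one-time "ff calibration": set each section's MIDI channel
# volume (CC7) once so relative loudness at ff is roughly right, then do
# all further balancing with dynamics, not faders. Woodwinds 90 vs. brass
# 127 are the post's illustrative numbers; strings is a placeholder.

FF_CALIBRATION = {
    "woodwinds": 90,
    "brass": 127,
    "strings": 110,  # assumed placeholder - tune by ear
}

def cc7_setup(channel_map):
    """Build (channel, controller, value) triples for a one-time CC7 setup.

    channel_map maps section name -> MIDI channel number. The triples would
    be sent once at the top of a sequence and then never changed.
    """
    return [(channel, 7, FF_CALIBRATION[section])
            for section, channel in channel_map.items()]
```

    The point of keeping the values fixed is exactly the self-imposed limitation described above: once sent, any remaining imbalance has to be solved by orchestration, not by the mixer.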

  • Martin,

    If I'm understanding the question properly, the answer is "no". VSL (and most sample libraries) are normalized, which sets the peak amplitude in each file to zero (0 dBFS). This means that all dynamics have the same volume on playback. This is generally the best way to make a sample library, as it allows the player, through things like velocity cross-switches or crossfades, to control the dynamic in a consistent and predictable way. This works well for dynamics *within* a single instrument, but what I'm talking about are dynamics (or, more accurately, the perceived loudness) *between* different instruments. This is where the normalization is a bit of a drag... There's no perfect solution, but I'm looking for something more true.

    So far, what William suggests (which is what I've been trying to agree with, though we seem to be just missing one another!) is the best idea: creating a "square-one" set of levels - really an imbalanced sort of "mix" - that better reflects the differences in amplitude of the various instruments, then composing with that mix as a sort of "control". Thus, during composition, all balancing would be done using the dynamic markings in the score, rather than the faders in a mixer.

    Actually, this is the way I already work, since I compose directly to score in Finale, and this is probably the main reason why I want the dynamic relationships better reflected in the sampler itself. With a typical Finale setup, you have the dynamics from pppp to ffff "hardwired" to set velocity levels. So, assigning mf to both a flute and a trombone sends velocity 64 to each. In "real life" this should result in the trombone being a fair bit louder than the flute, but in sample land that doesn't happen... at some point you have to do some tweaking! But, just as William described, I want to do that "tweaking" with dynamic markings, hairpins, and so on, on my score... at least to as great a degree as possible.

    Anyway, that's what's happening. And I'm sure everybody's got the idea and is bored s***less with my gassing on about it! [;)]

    cheers,

    J.
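The Finale situation described above can be sketched like this: a notation program sends the same velocity for a given marking to every instrument (mf as velocity 64, per the post), so with peak-normalized samples a per-instrument playback gain is needed on top to restore relative loudness. All velocity values other than mf's 64, and both dB offsets, are hypothetical placeholders, not measured data.

```python
# Sketch: a notation program maps each dynamic marking to one velocity for
# every instrument (mf -> 64, per the post), so with peak-normalized
# samples a per-instrument gain offset is needed on top to restore relative
# loudness. Velocities other than mf's 64, and both dB offsets, are
# hypothetical placeholders.

DYNAMIC_VELOCITY = {"pp": 36, "p": 50, "mp": 57, "mf": 64, "f": 78, "ff": 92}

INSTRUMENT_GAIN_DB = {"flute": -6.0, "trombone": 3.0}  # assumed offsets

def playback(instrument: str, dynamic: str):
    """Return (velocity, gain_db) for one note: the velocity depends only
    on the marking, while the instrument-specific gain models the real
    loudness difference that normalization throws away."""
    return DYNAMIC_VELOCITY[dynamic], INSTRUMENT_GAIN_DB[instrument]
```

With a fixed offset table like this in place, a trombone and a flute both marked mf receive the same velocity but different playback gains, so balancing can stay in the score rather than on the faders.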