Vienna Symphonic Library Forum

  • Hey gugliel,

    You posted while I was still typing!

    I've got the Rimsky book as well... I guess I could go through and, considering "double" to be a difference of around 3 dB, I could sort of "fake" it together (rough sketch below). Still, a comprehensive list would be really great.

    J.
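
    P.S. In case it helps, here's the kind of thing I mean by "faking it" from the Rimsky-Korsakov-style doubling counts -- just a quick Python sketch, and the doubling numbers here are purely hypothetical placeholders, not taken from the book:

        # Hypothetical sketch: turn "doubling" counts (how many solo woodwinds an
        # instrument roughly equals at forte) into approximate dB offsets,
        # assuming each doubling is worth about 3 dB.

        DOUBLINGS_VS_SOLO_WOODWIND = {  # invented values, for illustration only
            "flute": 0,
            "clarinet": 0,
            "horn": 1,
            "trumpet": 2,
            "trombone": 2,
        }

        DB_PER_DOUBLING = 3.0

        def offset_db(instrument: str) -> float:
            """Approximate level offset, in dB, relative to a solo woodwind."""
            return DOUBLINGS_VS_SOLO_WOODWIND[instrument] * DB_PER_DOUBLING

        for name in DOUBLINGS_VS_SOLO_WOODWIND:
            print(f"{name:10s} {offset_db(name):+.1f} dB")

    Crude, I know, but it would at least give a starting point for trimming each instrument before entering any dynamics.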

  • Laurent,

    ...ooops, I somehow missed your post as well. Wow! People really DO care about this topic!

    That's exactly what I'm getting at... It seems, however, that part of the unwritten story here is that you'd pretty much have to be working in a notation program, like Finale or Sibelius, to see it in this light (that is, you'd have to be dealing with "pp" and "ff"). If you're creating your piece in Logic or DP, or something like that, then you just follow your ears. However, I still think there are good reasons to "warp" the dynamic curves of the instruments... I just want to get my brass and percussion to jump out the way they do live, or in many of the concert music recordings I have.

    I also think this applies even more acutely to chamber music than orchestral music... I could be wrong, but it seems that balance issues regarding the upper dynamics are harder to deal with in sample-based realizations of chamber groups. In orchestral pieces it seems to be the other way -- making a solo violin appropriately quiet tends to be more of a challenge...

    J.

  • The only thing to remember is that recorded dynamic ranges are much narrower than what you hear in live ensembles.



    [edited - I must have been going a different direction with the previous sentence and forgot to change it!]

  • Yeah, Nick... That's true. But most of my recordings still give a good "kick" when the brass and percussion hit fff! Mostly, I'm just trying to get them to jump out more (even when I don't necessarily want them to), to add a little more realism to my composition setup.

    Have you noticed, BTW, that the Trombone seems a little thin at the louder dynamics? I suspect that this is due to the exact same issue that this whole thread is about; that the ff Trombone is actually *much* louder than most of the other instruments, and that the difference (in dB) between pp and ff produces a much more dramatic effect on the harmonic spectrum (this is true of all brass). When normalized, the subjective effect of the ff sample, next to the pp, is not so much that it is louder, but simply that it is "thinner", since the stronger upper partials associated with louder sound production become the threshold for normalization, thus effectively "pushing down" the lower partials (toy sketch below)... What I'd like to do is to find a way of making a more "true" curve (and relationship to the other instruments) to help reduce this effect. It's probably close to impossible... but I'd like to try.

    J.
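
    P.S. A toy illustration of what I mean by "thinner, not louder" (Python; this is not VSL's actual processing, and the partial amplitudes are just made up): peak-normalize a bright "ff"-ish tone and a dull "pp"-ish tone to the same peak, and the fundamental of the bright one lands several dB lower:

        import numpy as np

        sr = 44100
        t = np.arange(sr) / sr          # one second of audio
        f0 = 220.0                      # fundamental frequency in Hz

        # "pp": mostly fundamental.  "ff": strong upper partials (invented amplitudes).
        pp = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 2 * f0 * t)
        ff = (np.sin(2 * np.pi * f0 * t)
              + 0.9 * np.sin(2 * np.pi * 2 * f0 * t)
              + 0.8 * np.sin(2 * np.pi * 3 * f0 * t))

        def peak_normalize(x):
            return x / np.max(np.abs(x))

        def fundamental_db(x):
            """Level of the 220 Hz component relative to full scale."""
            spec = np.abs(np.fft.rfft(x))
            amp = 2 * spec[int(f0)] / len(x)   # amplitude of the f0 sinusoid
            return 20 * np.log10(amp)

        for name, tone in (("pp", pp), ("ff", ff)):
            print(name, f"fundamental after normalization: {fundamental_db(peak_normalize(tone)):.1f} dB")

    Same fundamental in both tones, but once they're forced to the same peak, the brighter one's low end sits roughly 6 dB lower in this toy case -- which is exactly the "thinner, not louder" impression.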

    I've thought about this a lot, and it is one of the most important topics. Unfortunately, almost no sample users ever do anything about it except "fix it in the mix" (if they do even that, which they often don't). It is in sampling that the most grotesque and ridiculous distortions of dynamic range occur.

    An example - a solo flute being louder than a trumpet section. This is nonexistent in reality, but no big deal to the average sample user.

    I strongly feel that the person doing an orchestral work with samples should use as his guide the philosophy of the great recording engineers of the 1950s, whose goal was to create a single mike placement that would capture every instrument perfectly. This necessarily entailed allowing for huge differences in dynamic range, with woodwinds being almost nonexistent and brass and percussion dominating.

    The loudest instrument in decibels in the orchestra is the timpani. The other percussion are the next loudest, and then the brass. All others are tiny in actual acoustic output compared to those. So if a composer using samples makes his woodwinds match the brass and percussion, he is utterly distorting the acoustic reality as soon as any dynamic above mp is heard. This is similar to close miking of weaker instruments.

    One can gain a basic appreciation of the relative dynamic ranges of the various instrumental groups from the traditional sizes found in the various orchestral sections. The reason there are dozens of strings is to attempt a match for about four brass instruments. As soon as you add the huge numbers of brass in the Romantic-era orchestra, you are talking about an even greater discrepancy.

    Another clue to these dynamic ranges can be heard in some great works of orchestral music. Perhaps the most illustrative are the symphonies of Bruckner. His music shows an enormous dynamic range, constantly contrasted in the most extreme way. And the woodwinds essentially drop out when matched against the large brass sections with timpani. The strings are barely audible as well.

    So it is a matter of how much you want to contradict the basic fact that the percussion can obliterate all other sections put together, and the brass can easily dominate the woodwinds to the point of inaudibility, and the strings to near-inaudibility. If you use dynamics that reflect the actual physical capabilities, as opposed to a conductor's judicious use of them, you are talking about very simple relationships. In a pp all instruments are equal. In an ff only percussion and brass exist. However, we are used to many years of recordings that nullify those relationships. Play in or conduct an orchestra for a while and you will be reacquainted with them.

    So there is no hard and fast answer. It depends ultimately on how you wish to represent the sound in your mix.

    While I understand your points, William, my goal is to develop my composition setup into as close a representation as possible of the acoustic reality of an ensemble playing the dynamic markings directly from the score. This is mostly due to the fact that I *do* work directly to score, and don't want to spend time "mixing" while I'm composing -- or rather, that I want to do the "mix" with the dynamic markings themselves (just like in the old days!). What I'm interested in is the sort of "feedback" my system provides for me while I'm composing. For me, that's what the VSL is all about -- making it seem as though I'm composing music, giving it to a group of musicians sitting in my studio, and hearing it played back -- with zero hours of rehearsal! And besides, as current sampler technology functions, there's essentially NO relationship between musical dynamics and perceived loudness. All samples are the same amplitude, so the old rules of orchestration no longer function, and that's precisely the problem.

    So, it's not that I don't understand the dynamic relationships of the different orchestral groups (though I admit that this takes an awfully long time to *really* learn), but simply that I want to adjust my sampler(s) to better represent the natural acoustic balance. Obviously, normalized samples do not do this. If the Holy Grail of orchestral sample libraries is pure realism, then I think this should extend to the working environment, not just the final "mixed" product. No one can convince me that current sampler technology really achieves this, and it's in the dynamic relationships between instruments that the most glaring problems can be found.

    So, as I said, I'll keep tweaking away until I find a trick that really works (or at least comes closer to working than conventional sample playback).

    J.

  • I hate to keep sounding like Scott Smalley's PR person, but he spent a fair amount of time on this subject in his film orchestration seminar. In fact the Rimsky-Korsakov formula came up, and Scott's opinion was that - for contemporary film music at least - it doesn't necessarily apply.

    He takes great pride in the fact that the balance - the "mix" - is accomplished through his orchestrations. (Pretty much what William says.) We went through numerous examples of doublings that, instead of adding "weight", actually soften a part; how the louder instruments (trumpets especially) can have incredible power playing a unison line, then immediately soften by splitting into harmony as another section needs to be prominent. He even said something like (and I'm paraphrasing here), "All the faders on the mixer in my MIDI studio are pushed to 11. I don't really mix there...I do it through the balance in the orchestrations."

    Going beyond the basics of creating dynamics, we listened to examples of parts that were written to be powerful, but got covered in the actual performance through poor orchestration. Then we explored how they could stand out more by re-voicing them in various ways. Scott even took the time to sequence some of the alternatives so we could hear the difference.

    It takes a good understanding of the mechanics of each instrument as well. For example, it's harder to play high parts quietly, and on most instruments, difficult to play lower parts loudly. So which range of the instrument you're writing for has a big impact. It's something you can't cover with a fader move in a live performance. (Scott joked that he can always tell an inexperienced orchestrator when he sees an outrageously high note...marked pianissimo.)

    It helped me a lot, and I'm already utilizing some of the techniques. I feel like I'm at the tip of the iceberg! (We've dug back into our Adler since returning from New York. And instead of automatically loading the keyswitched patches, I find myself looking for VSL instruments with the most dynamic layers.) In the past I've done so much of my orchestration on more of an intuitive basis...which turns out okay most of the time. But it TAKES a lot of time. Thinking about these ideas and concepts is already getting me where I want to be a lot quicker.

    Maybe I'll get really good at them someday. [:D]

    Fred Story

  • hi jbm,
    thanks for bringing this up. i also consider this a very important point.
    i think the standard orchestra instrumentation has emerged over centuries simply because it is balanced, both from the point of view of the sounds and of their dynamics. and this - and in my opinion not the fact that it is created by "natural" instruments - is the main characteristic that distinguishes it from any other instrument set. this inherent balance makes it much simpler to accomplish a good-sounding mix.
    in contrast, in the process of sampling sounds it is surely always desirable to get the best sound quality via normalized samples. thereby all sounds have the same level, and the balance is lost and has to be restored in the final mix, unfortunately. this requires a lot of knowledge and experience which people who are not sound engineers - like me - usually don't have (and trial and error is not a good strategy when there are on the order of 20 different levels to control in a tutti [:)]). therefore, i also would love to have such programs with "hardwired" dynamics, like you talked about - or at least a table or some curves that would enable me to create them myself (little sketch below).
    the sound engineers in vienna who did the actual recordings should have all this information - it would be great if they could post it (in particular also for the dynamic ranges that have not been sampled).
    kai
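
    ps: something like this is what i mean by "curves" - just a tiny python sketch, and all the numbers are invented, only to show the shape of such a table:

        # invented anchor levels (dB at some fixed distance) for pp and ff;
        # everything in between is just interpolated linearly in dB.
        DYNAMICS = ["pp", "p", "mp", "mf", "f", "ff"]

        ANCHORS = {                  # hypothetical (pp, ff) pairs, not measurements
            "flute":    (50.0, 85.0),
            "violin":   (50.0, 90.0),
            "trombone": (55.0, 105.0),
        }

        def level_db(instrument, dynamic):
            lo, hi = ANCHORS[instrument]
            step = DYNAMICS.index(dynamic) / (len(DYNAMICS) - 1)   # 0.0 .. 1.0
            return lo + step * (hi - lo)

        for inst in ANCHORS:
            print(inst, [round(level_db(inst, d), 1) for d in DYNAMICS])

    if the vienna engineers could post even just the pp and ff anchors for each instrument, the rest could be filled in like this.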

  • William,

    I'm not sure why everything I post becomes contentious for you. I was saying "old rules" in the most favorable and affectionate way!

    Somehow you completely missed the point of my last message (and actually, of the original post...). The history of orchestration ties dynamic markings to the perceived loudness of orchestral instruments and groups, and demonstrates a practice of achieving balance in spite of the fact that the instruments themselves inherently lack such balance. The traditional practice of orchestration -- which I obviously *support*, because it is essentially irrefutable -- is all about dealing gracefully with imbalance. Samples, on the other hand, flatten out all dynamic differences. This means that every dynamic marking I type into Finale, based on my understanding of traditional orchestration, is reflected as an identical loudness for all instruments in my sampler. That, to me, is a little annoying. So, I'm searching for hard data on instrumental amplitudes (dB, sones, phons, whatever) that can help me create a sort of "pre-mix" that reflects the *imbalances* between the instruments/groups *before* I enter any dynamics. Is that any clearer? Please try not to leap into a fury every time I use the words "old" or "romantic"...

    Fred. Thanks for the story -- strange that you're so good at telling them! [;)]
    I appreciate your input, and I certainly am under no illusions regarding my ability as an orchestrator. However, I really am looking for basic information in order to get a more true balance from my "unmixed" setup. One way to think of what I'm after is to imagine a software mixer with all the faders at zero, yet the balance of instruments on playback (with no dynamic markings) still reflects the different natural levels of each instrumental type (rough sketch below). If this were the case, then brass and percussion would be naturally louder, and I'd adjust my dynamic markings appropriately. Make sense? Yes, I can auralize the proper balance. I can base it on examples from the repertoire, but why not take full advantage of the remarkable realism of the VSL in this respect as well? Perhaps this doesn't mean anything to anybody else... I just think it would be great to have a sampler that actually made a Trombone much louder than a solo violin, without having to subjectively adjust the level to a point that 'seemed' accurate. And then be able to print off my scores with the same dynamics I used when composing with VSL, rather than always having to go through and adjust the dynamics to get a more sensible version for rehearsal. And please keep in mind that the majority of my composing is for chamber ensembles, which very seldom have pairs of instruments available for dynamic support.

    J.
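
    P.S. Here's a rough sketch of the "pre-mix" idea, just to make it concrete. The SPL figures are invented placeholders -- exactly the sort of numbers I'm hoping someone has real measurements for:

        # Derive a fixed per-track trim from assumed ff levels, so the loudest
        # instrument sits at 0 dB and everything else is pulled down to its
        # "natural" place before any dynamic markings are entered.

        ASSUMED_FF_SPL = {        # hypothetical values, not measurements
            "timpani": 110.0,
            "trombone": 106.0,
            "trumpet": 104.0,
            "horn": 100.0,
            "violin": 92.0,
            "flute": 88.0,
        }

        loudest = max(ASSUMED_FF_SPL.values())
        premix_trim_db = {name: spl - loudest for name, spl in ASSUMED_FF_SPL.items()}

        for name, trim in sorted(premix_trim_db.items(), key=lambda kv: -kv[1]):
            print(f"{name:9s} {trim:+6.1f} dB")

    Set those trims once, leave the faders alone, and then the dynamic markings do the rest.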

  • This is an interesting and worthwhile conversation...how we translate whatever real world composing and/or orchestration skills we have into the 'faux' world of samples.

    I like William's ideas, which, if I read them correctly, essentially say: forget all that. The sampled orchestra is a beast unto itself...capable of creating stuff we probably couldn't with a live orchestra.

    On the other hand, the real world of music for hire - or our own tastes - often ask for that time-honored live orchestral sound, which VSL has brought us so much closer to. How we can accomplish that better and faster is ALWAYS worth discussing.

    There are some pretty smart folks that hang out here. I've picked up new ways of thinking about things. What could be cooler than that?

    Fred Story

  • Yup. True, true.

    I also agree with William on the subject of creating works specifically *for* the samples. That's a different matter, since it's only about getting the balance you want, regardless of what might happen in performance. However, my work with VSL is about 60-40 in favor of creating pieces that are intended for live performance, in a concert situation. These are generally smaller groups, and generally lack rehearsal time, etc... So, I'd like the benefit of a "true" dynamic palette as a means of reducing the number of steps in realizing a new work.

    It's strange, but I think I probably had an easier time getting dynamic balance *before* I ever used samples for composition. Back then, I just went by my understanding of the general rules of orchestration, and my ability to auralize, with regard to balance, and it generally went quite well. But once you've tasted the apple, it's very hard to go back! I love VSL, but this step would seal the bond!

    I actually wonder if this will be part of the bigger MIR project, and the whole secret progress going on behind the scenes with VSL? The ideal, it seems to me, would be to have an orchestra of virtual instruments, placed in a perfectly modeled virtual space which would, by definition, require a perfect simulation of dynamic relationships as well... maybe? One can always dream! We certainly all know, by now, that Herb is very much committed to "the ideal".

    cheers,

    J.

  • last edited

    @Fred Story said:


    I like William's ideas, which if I read them correctly, essentially say - forget all that. The sampled orchestra is a beast unto itself...capable of creating stuff we probably couldn't with a live orchestra.

    Fred Story


    Hello lads! Mind if I sit down with you on this one? What are you drinking? [:D]

    That's it exactly Fred.

    I actually got told off the other day by Leon. [[:D]] He suggested to me that my template was wrong and that a real orchestra wouldn't sound like that. Hahhaa! I wish he would come in on this thread actually, because his ideas on orchestral sound and the examples on his site are really excellent. He's a good lad. DG would also be very useful to this discussion, given his conducting experience - something I personally would never try. [:O] Oops - again.

    But, I don't necessarily subscribe to that way of thinking. I can't understand, given the amount of power now available to us, why we want this so-called real sound with a sample library like VSL. I just don't get it.

    I sympathize with J and others who feel it's a prerequisite to get this real orchestral sound and ambience - if that's what they want. In other words, J wants to hear it as real as possible, and then record it with live players. EVEN if that is accomplished, I guarantee, guarantee - it'll still sound completely different when you get the live players in. Bound to. For all sorts of different reasons. And anyway, where's the excitement in that? I mean, what are you going to do? Play the sampled version to a group of live players first and say 'I want it to sound like that'?

    But I do understand, so this is not in any way meant to be a contradiction or argumentative just for the sake of it. Also, when live players are in session, you could use so many different takes -and then in post production do all sorts of tweaking. Add reverb here, EQ there and on and on.

    We're nearly there now, but there will come a time in a few years or so when most writers just aren't going to bother with real players a lot of the time, especially if there's dialogue or special effects over the top. The main problem with samples at the moment, for me anyway, is generally the solo stringed instruments - but that will probably change in time.

    Say you want to put non-orchestral elements in an orchestral piece - like a synthesizer? What does Rimsky say about that in his book? [[:D]]

    Much more fun to make up your own rules, I would say.

    PR

  • One more miscellaneous thought: just as a single pitch correction does little good for an instrument, a single dynamics measurement would not do much to get you (jbm) to your goal. For instance, consider a flute: in the mid-treble-clef area, it can't go much beyond piano or at most mp, but a very high B or C# can't be played at anything under ff, while a very high B-flat or A can be played from mf to almost ff. So, you'd need the samples to reflect that (unless you gain experience as an orchestrator with real instruments). And as someone said above, the whole process of recording really does squash the dynamic range anyway.
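
    To make that concrete, any usable table would really have to be per register, not per instrument -- something like this little sketch (the register boundaries and dynamic ranges here are only illustrative, not measured flute data):

        # Available dynamic range of a flute, by register (illustrative only).
        FLUTE_DYNAMIC_RANGE = {
            "low / mid treble clef": ("pp", "mp"),   # can't push much past mp
            "upper middle":          ("p",  "f"),
            "very high A / B-flat":  ("mf", "ff"),
            "very high B / C-sharp": ("ff", "ff"),   # effectively ff only
        }

        for register, (softest, loudest) in FLUTE_DYNAMIC_RANGE.items():
            span = softest if softest == loudest else f"{softest} .. {loudest}"
            print(f"{register:22s} {span}")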

  • last edited

    @jbm said:

    [...]
    I actually wonder if this will be part of the bigger MIR project, and the whole secret progress going on behind the scenes with VSL? The ideal, it seems to me, would be to have an orchestra of virtual instruments, placed in a perfectly modeled virtual space which would, by definition, require a perfect simulation of dynamic relationships as well... maybe? [...]


    That's no secret. The answer in a nutshell: Yes, what you ask for is exactly what we're after.

    /Dietz - Vienna Symphonic Library

  • Paul and gugliel,

    I completely agree with both your points. Paul: I'm only really looking to get "closer" to the real dynamics... but when you're talking about normalized samples, where a Trombone ff is precisely as loud as a Violin ff, well... Anyway, I hear you.

    gugliel: I realize that this is probably the main reason I'm not finding the table I'm looking for -- the scientific community generally points out pretty quickly the problems you're talking about. However, I would be happy with just the outer extremes of pp and ff. The dynamic curve I can deal with... (BTW, this is probably also something VSL has up their sleeves!) As I mentioned to Paul above, I'm not looking for absolute "truth", just a closer estimation.

    Dietz: Release it!!! AAAAck! I can't stand the suspense. I know something big is going on. I keep picking up little hints about something big... maybe that's just me dreaming... but I know these things are all technologically possible. They've just been waiting for someone with the drive (and $$$ support) to go ahead and do it! I'll sell my car, and sublet my apartment, then move into a cardboard box (located close to a public AC outlet) with my computers and a blanket. [...] Well, maybe not... quite. [;)]

    J.

  • last edited

    @jbm said:

    Paul and gugliel,

    I completely agree with both your points. Paul: I'm only really looking to get "closer" to the real dynamics... but when you're talking about normalized samples, where a Trombone ff is precisely as loud as a Violin ff, well... Anyway, I hear you.

    J.


    J, I just had a listen to your Harp piece in the Horizon demo section. Yeah - that's interesting, and a sort of new-to-me type of music. The sound is surely as real as you would want to get it. Obviously, the more instruments you use, the more interesting the mixing will become. This business with Trombones and Violins and their respective comparative volumes - most people just put stuff back or forward in the mix. Altiverb-style reverbs allow different positioning etc. Not perfect, but a goodish technique.
    You can tell that I'm the worst technical case on the forum when it comes to all this kind of thing.

    This business about MIR. Does that mean I will be able to just choose a sample and it will be automatically in the right space and the right kind of comparative volume?

  • last edited

    @Fred Story said:



    We've dug back into our Adler since returning from New York. And instead of automatically loading the keyswitched patches, I find myself looking for VSL instruments with the most dynamic layers.

    Fred Story


    You too!? [:D]

    Another observation of Scott's ties Fred's and William's comments together, noting the shift among recording engineers, who during the '60s and '70s wanted to close-mic sections to isolate them for greater mix flexibility. As things worked out in the '80s and after, Scott said the preference was/is to go with overheads and, again, rely on solid orchestration to achieve the right balance.

  • It is possible to create balances between the solo instruments and groups prior to any performance that are based on relative dynamic ranges. I like the idea of this, since it provides a base to operate from and then deviate if desired.

    For example, the fact pointed out that a normalized ff trombone sounds in a way "thinner" than a mf trombone is an artifact of the equalization of volume levels. Stand three feet away from the player, and it will not be thin at all; it will be deafening. With the normalization you are hearing a far greater percentage of the higher partials, which predominate at the loudest dynamic. So adjusting these is obviously essential, and I don't really mean to say it is all subjective. There is an objective basis to it, which can be established fairly straightforwardly with overall volume settings that are never changed, as well as a standard (for yourself) approach to compression, e.g. on woodwinds, basses, perhaps violas. If you then perform TO those initial settings, your performance begins to act more like the actual orchestra. In other words, the woodwinds have to be screaming at the top of their registers to compete at all with the ff brass - if you have these initial volume settings correct in MIDI, rather than wrong and then fixed by huge compensations in the mix. And doing this I don't think is all that difficult.

    In fact, one way to do it rather straightforwardly, if you want a mechanistic approach, would be to "calibrate" the entire ensemble's ff dynamic so that these basic balances are in place within the sequencer (rough sketch below). Because the dynamic ranges of each instrument are not wrong WITHIN themselves - they are wrong in RELATIONSHIP to each other. And adjusting those relationships is best done prior to performance, not afterwards, because one can then use traditional (and natural-sounding) scoring techniques to achieve balance, instead of huge, artificial compensations in mixing. If, after doing this sort of real-time setup, your woodwinds, for example, were overbalanced on an important line by the brass, you wouldn't "turn the brass down" or "turn the woodwinds up"; you would play softer dynamic samples in the brass, or adjust the voicing so that it is lighter - exactly as a conductor or orchestrator, not a recording engineer, would do. This is obviously easier said than done, and you will probably still have to fix it in the mix, but it would be nice as a basic operating principle.

    Apparently the MIR project is going to have some of these relationships built into it (?) which would be a huge leap forward.
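
    For anyone who wants to try the mechanistic version, here is a sketch of what I mean by "calibrating" at ff. The section levels and channel layout are purely hypothetical, and the CC7 curve is only the commonly assumed one, not any particular sampler's:

        # Turn assumed ff offsets (relative to the loudest section) into fixed
        # MIDI channel volumes that are set once and never ridden afterwards.

        ASSUMED_FF_OFFSET_DB = {     # invented values, relative to timpani at 0 dB
            1: ("timpani",    0.0),
            2: ("brass",     -4.0),
            3: ("strings",  -16.0),
            4: ("woodwinds", -20.0),
        }

        def db_to_cc7(offset_db):
            """Map a dB offset to CC7, assuming the common 40*log10(v/127) volume curve."""
            value = round(127 * 10 ** (offset_db / 40.0))
            return max(0, min(127, value))

        for channel, (section, offset) in ASSUMED_FF_OFFSET_DB.items():
            print(f"ch {channel}: {section:9s} CC7 = {db_to_cc7(offset)}")

    After that, balance problems get fixed the way a conductor would fix them - softer dynamic samples, lighter voicings - not by touching those settings.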

  • As a relative latecomer to this thread, I have to say that I agree with most of what is being said. When I write, I have a very clear idea in my head as to whether or not the mix will be "real". It may well be that the orchestra part of it is traditional, but that there may be an Erhu or Whistle (for example) overdub. Of course in the Concert Hall the solo instrument would be lost if not voiced correctly, but in a recording situation this can be faked. However, I would consider myself to have failed if the volume had to be increased (or decreased) dramatically for any of the orchestral instruments.

    I am very much of the opinion that I like my orchestrations to sound properly voiced in the session, and have, on occasion, "asked" an engineer to remix a track where some of the supporting instruments had been lifted in dynamic. However, with a lot of commercial sessions I have to be very careful about how I orchestrate. For example, if I have a high, loud horn passage, I sometimes double it with the 3rd trumpet at a much lower dynamic, which adds to the brilliance of the overall horn sound, without being heard as a separate entity. However, in a close mike situation, unfortunately one hears the trumpet from the other side of the room, and the blend is usually not good.

    Of course, sometimes the brass dynamics are overdone in order to give the players the idea of the sort of sound required. In these situations, one has to rely on the experience of the conductor to balance the overall sound. Anyone who has performed any Beethoven Symphonies, for example, will surely have come across the situation where the trumpets are marked fff along with the rest of the orchestra, and have obliterated everything! Obviously with modern orchestration this shouldn't happen, but it is always something to be aware of.

    Regarding MIDI balance, I think that where VSL can work extremely well is that just choosing the correct dynamic level patch can often do the MIDI balance for you.

    DG

  • last edited

    @DG said:


    Regarding MIDI balance, I think that where VSL can work extremely well is that just choosing the correct dynamic level patch can often do the MIDI balance for you.

    This would assume that when the instruments were recorded at varying dynamics, the volume levels were left unchanged, and that they were not adjusted after the fact. Was this actually the case?