Vienna Symphonic Library Forum
Forum Statistics

196,732 users have contributed to 43,031 threads and 258,433 posts.

In the past 24 hours, we have 7 new thread(s), 14 new post(s) and 102 new user(s).

  • @jbm said:

    Paul and gugliel,

    I completely agree with both your points. Paul: I'm only really looking to get "closer" to the real dynamics... but when you're talking about normalized samples, where a Trombone ff is precisely as loud as a Violin ff, well... Anyway, I hear you.

    J.


    J, I just had a listen to your Harp piece in the Horizon demo section. Yeah, that's interesting, and a sort of music that's new to me. The sound is surely as real as you would want it. Obviously, the more instruments you use, the more interesting the mixing will become. As for this business with Trombones and Violins and their comparative volumes: most people just put things further back or forward in the mix. Altiverb-style reverbs allow different positioning and so on. Not perfect, but a goodish technique.
    You can tell that I'm the worst technical case on the forum when it comes to all this kind of thing.

    This business about MIR: does that mean I will be able to just choose a sample and have it automatically in the right space, at the right comparative volume?

  • @Fred Story said:

    We've dug back into our Adler since returning from New York. And instead of automatically loading the keyswitched patches, I find myself looking for VSL instruments with the most dynamic layers.

    Fred Story


    You too!? [:D]

    Another of Scott's observations ties Fred's and William's comments together: he noted the shift among recording engineers who, during the '60s and '70s, wanted to close-mic sections to isolate them for greater mix flexibility. As things worked out in the '80s and after, Scott said, the preference was (and is) to go with overheads and, again, to rely on solid orchestration to achieve the right balance.

  • It is possible, prior to any performance, to create balances between the solo instruments and groups that are based on relative dynamic ranges. I like this idea, since it provides a base to operate from and then deviate from if desired.

    For example, the observation that a normalized ff trombone sounds somewhat "thinner" than an mf trombone is an artifact of the equalization of volume levels. Stand three feet away from the player and it will not be thin at all; it will be deafening. With normalization you are hearing a far greater proportion of the higher partials, which predominate at the loudest dynamic. So adjusting these levels is essential, and I don't really mean to say it is all subjective. There is an objective basis to it, which can be established fairly straightforwardly with overall volume settings that are never changed, together with a standard (for yourself) approach to compression, e.g. on woodwinds, basses, and perhaps violas. If you then perform TO those initial settings, your performance begins to act more like the actual orchestra. In other words, the woodwinds have to be screaming at the top of their registers to compete at all with the ff brass, provided you have these initial volume settings correct in MIDI rather than wrong and then fixed by huge compensations in the mix. And I don't think doing this is all that difficult.

    In fact, one way to do it rather straightforwardly, if you want a mechanistic approach, would be to "calibrate" the entire ensemble's ff dynamic so that these basic balances are in place within the sequencer. The dynamic ranges of each instrument are not wrong WITHIN themselves; they are wrong in RELATIONSHIP to each other. And adjusting those relationships is best done prior to performance, not afterwards, because one can then use traditional (and natural-sounding) scoring techniques to achieve balance instead of huge, artificial compensations in mixing. If, after this sort of real-time setup, your woodwinds were overbalanced on an important line by the brass, you wouldn't "turn the brass down" or "turn the woodwinds up"; you would play softer dynamic samples in the brass, or adjust the voicing so that it is lighter, exactly as a conductor or orchestrator (not a recording engineer) would do. This is obviously easier said than done, and you will probably still have to fix some things in the mix, but it would be nice as a basic operating principle.

    Apparently the MIR project is going to have some of these relationships built into it (?) which would be a huge leap forward.
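The "calibration" idea above can be sketched numerically. This is a minimal illustration, with assumptions: the per-section ff offsets in dB are made up for the example, and the channel-volume curve is the GM-style convention (gain in dB = 40·log10(cc/127)); actual samplers may use a different curve.

```python
# Hypothetical relative ff loudness per section, in dB below the
# loudest section (brass). These numbers are illustrative only.
FF_OFFSET_DB = {"brass": 0.0, "strings": -6.0, "woodwinds": -9.0, "harp": -18.0}

def cc7_for_offset(offset_db: float, reference_cc: int = 127) -> int:
    """Convert a dB offset into a MIDI CC7 (channel volume) value,
    assuming the GM-style curve gain_dB = 40 * log10(cc / 127)."""
    cc = reference_cc * 10 ** (offset_db / 40.0)
    return max(0, min(127, round(cc)))

# Fixed "square-one" channel volumes, set once and never touched:
calibration = {name: cc7_for_offset(db) for name, db in FF_OFFSET_DB.items()}
print(calibration)  # {'brass': 127, 'strings': 90, 'woodwinds': 76, 'harp': 45}
```

With these illustrative numbers, a section 6 dB below brass lands at CC7 = 90 against 127, the same shape as the 90-versus-127 settings suggested elsewhere in the thread.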

  • As a relative latecomer to this thread, I have to say that I agree with most of what is being said. When I write, I have a very clear idea in my head as to whether or not the mix will be "real". It may well be that the orchestra part of it is traditional, but that there may be an Erhu or Whistle (for example) overdub. Of course in the Concert Hall the solo instrument would be lost if not voiced correctly, but in a recording situation this can be faked. However, I would consider myself to have failed if the volume had to be increased (or decreased) dramatically for any of the orchestral instruments.

    I very much like my orchestrations to sound properly voiced in the session, and have, on occasion, "asked" an engineer to remix a track where some of the supporting instruments had been lifted in dynamic. However, with a lot of commercial sessions I have to be very careful about how I orchestrate. For example, if I have a high, loud horn passage, I sometimes double it with the 3rd trumpet at a much lower dynamic, which adds to the brilliance of the overall horn sound without being heard as a separate entity. In a close-mic situation, though, one unfortunately hears the trumpet from the other side of the room, and the blend is usually not good.

    Of course, sometimes the brass dynamics are overdone in order to give the players the idea of the sort of sound required. In these situations, one has to rely on the experience of the conductor to balance the overall sound. Anyone who has performed any Beethoven Symphonies, for example, will surely have come across the situation where the trumpets are marked fff along with the rest of the orchestra, and have obliterated everything!
    Obviously with modern orchestration this shouldn't happen, but it is always something to be aware of.

    Regarding MIDI balance, I think that where VSL can work extremely well is that just choosing the correct dynamic level patch can often do the MIDI balance for you.

    DG
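DG's point that choosing the correct dynamic-level patch can do the MIDI balance for you can be illustrated with a toy velocity map. A minimal sketch, assuming made-up velocity ranges and layer names; real VSL patches define their own switch points.

```python
# Hypothetical velocity ranges for the dynamic layers of one patch.
# Real patches define their own switch points; these are made up.
LAYERS = [
    (0, 42, "p layer"),
    (43, 84, "mf layer"),
    (85, 127, "ff layer"),
]

def layer_for_velocity(velocity: int) -> str:
    """Return the sample layer a given note-on velocity would trigger."""
    for low, high, name in LAYERS:
        if low <= velocity <= high:
            return name
    raise ValueError(f"velocity {velocity} out of MIDI range")

print(layer_for_velocity(64))   # mf layer
print(layer_for_velocity(100))  # ff layer
```

The balance then comes from which layer the performance selects, not from moving faders afterwards.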

  • @DG said:

    Regarding MIDI balance, I think that where VSL can work extremely well is that just choosing the correct dynamic level patch can often do the MIDI balance for you.

    This would assume that when the instruments were recorded at varying dynamics, the volume levels were left unchanged, and that they were not adjusted after the fact. Was this actually the case?

  • William,

    You've got it. That's exactly what I'm after... I just wanted an empirical guideline to help in setting those levels. So, as was pointed out with the Rimsky quotes early in the thread, I could just go by the general principles of orchestral balance and dynamics, extrapolate a little, and set my 'pre mix' volumes accordingly. That would probably do just fine.

    cheers,

    J.

  • Well actually what I was suggesting was a simpler, more numerical approach to the initial settings, and THEN using orchestration and performance for the balance. In other words, attempting to emulate the simple physical facts of the instruments in your initial MIDI and DSP settings, and never varying from that (or at least not very much) in the subsequent performance. This would be somewhat restrictive from a recording engineer's perspective, but would allow a composer to put into play his orchestration to address problems with balance.

    I think one important aspect of this idea is that it is not done after the fact. In other words, it is a self-imposed limitation you deal with in the performance, analogous to the limitations a conductor deals with at a rehearsal.

    But these initial settings would not really be done with principles or adjustments of orchestration. They would be set according to obvious differences in volume levels, and could simply be approximations, e.g., woodwind overall volume settings in MIDI at 90 as opposed to 127 for brass, or whatever level you arrive at (and personally like, since it will never really be completely objective) in the ff "calibration".

    I have not tried this but the discussion has made me want to experiment with it.
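A rough way to see why performing to fixed settings works: estimate each part's output as the fixed channel trim plus the level implied by the dynamic sample chosen in performance. This sketch assumes the GM-style channel-volume curve (40·log10(cc/127) dB) and illustrative per-dynamic levels; both are assumptions, not measured values.

```python
import math

def channel_gain_db(cc7: int) -> float:
    """GM-style channel volume curve (an assumption; samplers vary)."""
    return 40.0 * math.log10(cc7 / 127.0) if cc7 > 0 else float("-inf")

# Fixed "square-one" volumes, set once and never moved during performance.
CHANNEL_CC7 = {"brass": 127, "woodwinds": 90}

# Illustrative level implied by the dynamic sample chosen, in dB below ff.
DYNAMIC_DB = {"ff": 0.0, "mf": -10.0, "p": -20.0}

def estimated_level_db(section: str, dynamic: str) -> float:
    """Fixed channel trim plus the chosen dynamic layer's level."""
    return channel_gain_db(CHANNEL_CC7[section]) + DYNAMIC_DB[dynamic]

# Balance by choosing dynamics, not by moving faders:
print(estimated_level_db("brass", "mf"))      # -10.0
print(estimated_level_db("woodwinds", "ff"))  # about -6.0
```

Under these assumptions, dropping the brass to mf samples lets ff woodwinds sit on top, which is the conductor's move rather than the engineer's.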

  • Martin,

    If I'm understanding the question properly, the answer is "no". The VSL (like most sample libraries) is normalized, which sets the peak amplitude in each file to full scale. This means that all dynamics have the same volume on playback. This is generally the best way to make a sample library, as it allows the player, through things like velocity cross-switches or crossfades, to control the dynamic in a consistent and predictable way. That works well for dynamics *within* a single instrument, but what I'm talking about are the dynamics (or, more accurately, the perceived loudness) *between* different instruments. This is where the normalization is a bit of a drag... There's no perfect solution, but I'm looking for something truer.

    So far, what William suggests (which is what I've been trying to agree with, though we seem to be just missing one another!) is the best idea: creating a "square-one" set of levels, which is really an imbalanced sort of "mix" that better reflects the differences in amplitude between the various instruments, then composing with that mix as a sort of "control". During composition, all balancing would then be done using the dynamic markings in the score rather than the faders in a mixer. Actually, this is the way I already work, since I compose directly to score in Finale, and this is probably the main reason I want the dynamic relationships better reflected in the sampler itself.

    With a typical Finale setup, the dynamics from pppp to ffff are "hardwired" to set velocity levels. So assigning mf to both a flute and a trombone sends velocity 64 to each. In "real life" this should result in the trombone being a fair bit louder than the flute, but in sample land that doesn't happen... at some point you have to do some tweaking! But, just as William described, I want to do that "tweaking" with dynamic markings, hairpins, and so on, in my score, at least to as great a degree as possible.

    Anyway, that's what's happening. And I'm sure everybody's got the idea and is bored s***less with my gassing on about it! [;)]

    cheers,

    J.
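The Finale setup described above, where each marking maps to one fixed velocity for every instrument, can be sketched alongside one possible workaround: a per-instrument velocity trim. The marking table (apart from mf = 64, which is from the post) and the trims are hypothetical; Finale's actual defaults and each library's layer responses differ.

```python
# Marking-to-velocity table; mf -> 64 matches the post's example,
# the other values are made up for illustration.
DYNAMIC_VELOCITY = {
    "pp": 32, "p": 48, "mp": 56, "mf": 64, "f": 96, "ff": 112, "ffff": 127,
}

# Hypothetical per-instrument trims to restore relative loudness.
INSTRUMENT_OFFSET = {"trombone": +12, "flute": -8}

def playback_velocity(marking: str, instrument: str) -> int:
    """Velocity actually sent: the marking's fixed velocity, nudged by
    a per-instrument trim and clamped to the usable MIDI range."""
    v = DYNAMIC_VELOCITY[marking] + INSTRUMENT_OFFSET.get(instrument, 0)
    return max(1, min(127, v))

# The same mf marking now lands louder on trombone than on flute:
print(playback_velocity("mf", "trombone"))  # 76
print(playback_velocity("mf", "flute"))     # 56
```

This keeps the balancing in velocity (i.e., in which dynamic layer plays) rather than in mixer faders, in the spirit of the square-one-levels approach.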