Vienna Symphonic Library Forum
Forum Statistics

194,724 users have contributed to 42,932 threads and 258,000 posts.

In the past 24 hours, we have 7 new thread(s), 18 new post(s) and 110 new user(s).

  • Before reading this thread I had made an incorrect assumption, which is:

    I thought samples were never altered in volume after being recorded. After reading Kenneth's response, that seems not to be the case - unless I am still wrong, in which case please correct me.


  • @DG said:

    The other thing to remember is that certain instruments react with their surroundings far more than others, and the convolution IRs cannot reflect this, as they are based on sounds coming out of a speaker. In fact you'll find that even if one was to broadcast a real performance from an anechoic chamber into an acoustic via a speaker, it still wouldn't sound the same as having a player there.

    Very much agree on this. Once you have microphones and speakers it won't be as natural as you'd expect. A conductor once told me: "There is a mixer in an orchestra, and he can make or break the performance." And I was thinking, "Wow! So it's really not as natural as we'd expect."


  • @nektarios said:

    Before reading this thread I had made an incorrect assumption, which is:

    I thought samples were never altered in volume after being recorded. After reading Kenneth's response, that seems not to be the case - unless I am still wrong, in which case please correct me.

    All samples are likely to be altered in volume a small amount after being recorded. For a start, you would want the notes of each dynamic to have a volume that matches, in order to make the instrument's response predictable. You will also find that the level of the whole instrument is likely to be raised, particularly for soft instruments. Hence the Natural Volume feature.

    Where the techniques differ is in the actual dynamic range of an instrument or section. Different developers approach this in different ways.

    DG
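
    The per-dynamic level matching DG describes can be sketched in a few lines. This is purely illustrative - the `rms`/`normalize_layer` helpers, the target level, and the sample values are invented, not VSL's actual process:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of float samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize_layer(samples, target_rms):
    """Scale one recorded dynamic layer so its RMS hits target_rms,
    making the instrument's response predictable across layers."""
    gain = target_rms / rms(samples)
    return [s * gain for s in samples]

# Two hypothetical takes of the same note at different dynamics:
pp_take = [0.01, -0.02, 0.015, -0.01]
ff_take = [0.40, -0.50, 0.45, -0.35]

# After normalization both layers sit at the same predictable level;
# the musical dynamic is reapplied later by the sample player.
pp_norm = normalize_layer(pp_take, 0.25)
ff_norm = normalize_layer(ff_take, 0.25)
```

    Raising the overall level of a soft instrument - DG's Natural Volume point - would then just be one more gain stage on top of this.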


  • @Another User said:

    There is science to be gleaned from all of this, but I think that it is far too complicated for any company to solve in the short term, and what we are left with is generalities. I agree that the science of it ought to be nailed down as far as possible, but also understand that there are too many variables for any definitive answer.

    If you rented a hall, seated the instrument-players in their normal locations on stage, hung a microphone over the conductor, recorded every note on every instrument at lots of different dynamic levels, and if you never touched your gain faders throughout this process, and you never changed the volume of the samples, you'd get Natural Volume; and to use such a hypothetical library, you'd never alter CC7 or CC11, you'd just use velocity or CC1, and it would control timbre and loudness at the same time, always keeping them in their natural relation.

    This hypothetical library would be inferior to VSL in lots of ways, but at least it serves as a model of Natural Volume made easy for the end-user. Beethoven had to decide among ppp .. fff for each note but that's just a simple one-dimensional variable for dynamics. I have to set velocity, CC7, CC11, DynR, Velocity Curve, and probably some other variables I'm overlooking. That degree of control -- 5 independent variables -- is nice when I want to make unnatural sounds, but it only gets in the way when I'm trying to do things like Beethoven did them.
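
    The single-control behaviour described above could be sketched as follows; the linear crossfade and the -36 dB floor are assumptions chosen for illustration, not any library's actual response curve:

```python
def natural_dynamic(cc1, floor_db=-36.0):
    """Map one MIDI controller (CC1, 0-127) to both a dynamic-layer
    crossfade position and a gain, so timbre and loudness always
    move together in their recorded relation."""
    x = cc1 / 127.0
    crossfade = x                    # which dynamic layers to blend
    gain_db = floor_db * (1.0 - x)   # loudness follows the same control
    return crossfade, gain_db
```

    With a mapping like this there is nothing left for CC7 or CC11 to do: dynamics collapse back to the single one-dimensional variable a composer marks as ppp..fff.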


  • @DG said:

    The other thing to remember is that certain instruments react with their surroundings far more than others, and the convolution IRs cannot reflect this, as they are based on sounds coming out of a speaker.

    I'm skeptical about your point here. I haven't experienced this, and I don't see how the laws of physics would allow for the 'reaction' you're suggesting.

    There's nothing wrong with being skeptical. I don't have a physics background and certainly can't point to any scientific data to confirm or deny what I said. However, observation is also very important in acquiring scientific knowledge, and as I've conducted hundreds of performances in many different venues, I am speaking from my own experience regarding instruments and their reaction to their surroundings.

    DG


  • @BachRules said:

    I'd appreciate it if VSL would inform us how to set things so that there's no dynamic compression or expansion applied to the instruments.


    That is not going to happen, because once the samples have been normalized, you can't reverse it. Think about it: every note on an instrument has its own dynamic range. On a flute, for example, the higher register can go much louder than the lowest register. So you would have to set the dynamic range separately for each register of each instrument. I don't think that's even possible in VI Pro, and even if it were, it would be a crazy amount of work.

    So, the Natural Volume feature in MIR can only be an approximation. It means that the loudness of the instruments is generally balanced against each other. But it won't account for cases where you play the lowest note on the flute and assign a CC 1 value of 127: it will be too loud compared to the rest of the instruments (simply because the low notes of the flute in VSL have the same dynamic range as the high notes, which in reality they don't). That's where we'll still need our brains and our ears. But in my experience it's less of an issue than you'd think. In most cases the balance is good enough that it won't disturb the 'reality' of the mock-up. And in the very few cases where it does: simply adjust the volume a little.

    So, I guess, if you don't want to think about balancing and volume, you'll have to buy samples that are not normalized. But even then you'll have to mix. Even in recordings of real orchestras, recording engineers have to make subtle adjustments to balance the volume, so you can't expect samples not to have that problem.

    On the bright side, of all the mock-ups I have heard, volume balance was rarely the main thing that prevented them from sounding 'real'. Usually there are a whole lot of other problems in a mock-up that give it away.
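
    In principle the flute problem described above could be corrected with a per-register cap on the reachable level. The register split and the dB numbers below are invented purely to illustrate the idea:

```python
# Hypothetical per-register ceilings for a flute, in dB below the
# instrument's loudest note; the values are illustrative only.
REGISTER_CEILING_DB = {"low": -12.0, "mid": -4.0, "high": 0.0}

def register_of(midi_note):
    """Assign a MIDI note to a register (illustrative split points)."""
    if midi_note < 72:   # below C5
        return "low"
    if midi_note < 84:   # C5 to B5
        return "mid"
    return "high"

def corrected_gain_db(midi_note, cc1):
    """Cap the loudness a note can reach according to its register,
    approximating the real instrument's uneven dynamic range."""
    full_range_db = 36.0
    requested_db = -full_range_db * (1.0 - cc1 / 127.0)
    return min(requested_db, REGISTER_CEILING_DB[register_of(midi_note)])
```

    Doing this properly would mean measuring such a table for every register of every instrument, which is exactly the "crazy amount of work" the post refers to.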


  • @DG said:

    ...I've conducted hundreds of performances in many different venues...

    I haven't done that. I'm curious why convolution would fall short. Convolution has limitations -- it fails to model ways a room might change over time -- but in theory convolution is really good at simulating nature.
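
    For context, convolution reverb applies one fixed, pre-recorded impulse response to the dry signal. A minimal direct-form sketch (illustrative only; real engines use FFT-based fast convolution):

```python
def convolve(dry, ir):
    """Direct-form convolution of a dry signal with a room impulse
    response: the core operation of convolution reverb."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for n, x in enumerate(dry):
        for k, h in enumerate(ir):
            out[n + k] += x * h
    return out
```

    Because `ir` never changes, any interaction where the instrument excites the room differently over time falls outside what this operation can model - which is the limitation being debated here.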


  • @Another User said:

    ... But even then you'll have to mix. Even in recordings of real orchestras, recording engineers have to make subtle adjustments to balance the volume, so you can't expect samples not to have that problem.

    The art of mixing is a separate topic entirely. It's a nice topic, but it's not this one.


  • To achieve what you want you would have to record each instrument in its respective place on stage, with the mic placed at the conductor's position. VSL is recorded with close mics in their silent stage. It's a different concept, and each has its pros and cons. A con of recording in place is that it's difficult to use such instruments in a solo piece; that's why Orchestral Tools recorded soloists (with a different concept!) on top of their orchestral woodwinds. A con of recording close up is that you have to balance the volume between the instruments. Natural Volume is a good starting point that helps bridge the gap between the two concepts. But still, it's not the same as recording in place and never will be. In short, VSL's approach is more flexible, but needs more work (balancing the volume, placing the sound in a room, applying reverb etc.). If you don't want to do that work, you should have bought a library with a different concept.


  • @Another User said:

    To achieve what you want you would have to record each instrument in its respective place on stage, with the mic placed at the conductor's position.

    That's not true. What I want can be achieved with software and samples recorded close on a silent stage. You may not understand how software could do such a thing, but I do understand. It's simply a matter of whether VSL wants to offer this feature to their customers. I don't see why you're insisting otherwise.


  • @DG said:

    ... the dynamic ranges of VI are not really accurate. For example the dynamic range of a Flute is far less than say a Trumpet, but in the VI player the difference is less severe than it should be.

    When I first read this, it made sense to me, but now I don't get it. What's unnatural about this example (warning: loud trumpet)?

    [url]https://drive.google.com/file/d/0B5ZYXb_HdIQhS3FGZkxPUnpOVTA/view?usp=sharing[/url]

    The flute and trumpet are at the exact same location on stage.


  • @Dominique said:

    On a flute, for example, the higher register can go much louder than the lowest register.... Natural Volume feature in MIR... won't account for the cases where you play the lowest note on the flute, and assign a cc 1 value of 127. It will be too loud compared to the rest of the instruments (simply because the low notes of the flute in VSL have the same dynamic range as the high notes, which in reality they don't).

    This is natural or unnatural?

    [url]https://drive.google.com/file/d/0B5ZYXb_HdIQhYXc2ZGVSUlRBVDA/view?usp=sharing[/url]


  • DG and Dominique,

    You have me so confused now. You're saying the current Natural Volume in MIR is a good start but breaks down in some cases. Can you give me a specific example of a case where it's imperfect? Please tell me which instruments to play simultaneously, at which specific pitches and velocities, so that I get unnatural results.


  • As I'm currently abroad I can't give you such an example until midweek. I'll try to do it then.

  • @Dominique said:

    As I'm currently abroad I can't give you such an example until midweek. I'll try to do it then.

    Looking forward to it, at your convenience.


  • Ok, I came up with something here:

    Balance snippet

    It's a snippet from a classical piece, played twice. One is a recording, one is a mock-up I made. Can you tell which is which? Which snippet do you like better, and why? Is there something that strikes you as unnatural in one or both of the snippets?

    Don't worry, these are not trick questions and there are no wrong answers. I'm just really interested in how this is perceived, and I hope it can help us discuss a thing or two about natural volume and balancing.


  • Not sure what this is really about. Hair-splitting and arguing semantics?

    Natural volume is a nice feature that gives you a general guideline. It's a time saver. It's there to get you in the right ballpark quickly. But it can't eliminate the necessary task of arranging and mixing music - just like there can never be a go-to channel strip plugin preset that says "apply great professional mix to the track"; and actually does what it promises.

    The CC7 of all my VI Pro instances is set to 127 by default. Natural Volume is on, and I do what I feel needs to be done with expression and vel-xfade. Everything eventually ends up being rendered to audio before mixing anyway, the good old-fashioned way. Every single project is completely individual in terms of fader volumes, groupings, EQ, processing etc. Sometimes the results are OK, sometimes not so much, but it's not, and cannot be, the job of a plugin or software feature to make sure that my music sounds acceptable through automation and preset values. Keep trying, make crappy mixes, improve and continue to learn - it's all one can do.

    No amount of sophisticated software, compulsory and scientifically sound numerical values, and creatively named features can relieve you of the necessity of making sensible musical decisions, riding your faders and using your ears. Using one's ears - I realize it's a scary thought, especially in the world of digital DAW music production and the phenomena of "visual" and "numerical" mixing it entails. But it's just the way it is. Mixing music isn't color by numbers.


  • @Another User said:

    I'm just really interested in how this is perceived, and I hope it can help us discuss a thing or two about natural volume and balancing.

    I'm stuck on statements made by you and DG, which I quoted in my two most recent posts. Without clarification, I don't understand why you and DG believe VSL's Natural Volume is imperfect.


  • @Another User said:

    We're talking about things unrelated to human perception, things which would still exist even if robots or a plague killed all the humans. That, at least, is how I meant the topic when I personally created it. There are so many other threads for people to talk about art and human perception. This is probably the least appropriate thread for that. Hijacking the thread to muse about art is disrespectful.... This thread is about objective levels and has absolutely nothing to do with how those levels are perceived by any human's ear. Pretend an asteroid killed all the humans, for this thread. We're just talking about sound waves in a hypothetical world with no humans.


    As people continue to misunderstand this topic, I'll continue to clarify that it's unrelated to human perception. It's just about numbers in DAWs and pressure waves moving through air, not about humans. If I could make that clear using fewer words, I would.

    I see this forum is short on people who comprehend that the natural world doesn't revolve around them (cf. the Catholic Church's attack on Galileo when he heretically suggested the Universe doesn't revolve around the Earth), so I'll just try to step back and observe the artists express their grandiose, egocentric views about audio engineering.


  • @BachRules said:

    No one's suggested it can. On the contrary, post after post after painful post, I point out that arrangement and mixing are totally irrelevant to this specialized topic.

    I realize that you took the effort to point that out, but I'd argue that the sentiment is mistaken. Why would you ever be concerned about relations of volume if not in the context of musical arrangement, balance of orchestration and the mix? I mean, you can do it if it brings you joy or whatever, but in the context of music making, it's idle.