Vienna Symphonic Library Forum

  • Mastering Loudness in orchestral recordings

    Hi everyone,

    As you may know, I'm constantly seeking better ways to mix and to achieve a realistic recording from virtual instruments. I often scour the internet looking at discussions on what other professionals and newbies have to say. It's quite astonishing how negative the subject of 'loudness' can be in some quarters. I wanted to start a discussion on this, if nothing else than to put down some ideas that come to mind.

    I have to say that the main disputed point of orchestral mixing seems to be that many people have never learned how to do it, and aren't likely to be in a position to study it from a live point of view. So these composers turn to the discussion forums asking the oft-asked questions "How much compression should I use?" and "Why aren't my recordings as loud as a commercial release?". More often than not these questions are met by frustrated professionals who end up chastising the newbies for their apparent lack of knowledge of audio and orchestral mixing. In fact these questions are asked so often you'd think there'd be a huge wiki article by now explaining the whys and hows.

    After seeing many of these discussions in similar veins, it appears to me that several points are not being mentioned by the 'pros'. And these points could actually help the question-askers much more than being told "Go take a college course!". So here are my thoughts on these oft-asked questions .. feel free to chime in and correct me.

    1). Question: "How much compression should I use?"

    This usually gets answered with things like 'How dare you even mention the word compression!!'. The better answer is that in the virtual orchestra world it's nearly impossible to make a recording of decent volume without compressing your tracks to some degree.

    Obviously things like how you've programmed your velocities and expressions, how close you want your orchestra to be to the mics, and what post-processing you're using .. will all affect the overall loudness .. but the fact is that I've never yet had a recording of a big orchestra (with VSL anyway) that didn't need some compression to beef up the volume. That doesn't mean you have to crush it to pieces; be sensible with your compression ratio, use it just to gain volume, and stop when you're at a good level.

    Yes, compression shouldn't be overused; after all, you want to preserve the dynamic range of your recording. In real-world recording you may not need compression because a) you're recording in a real hall with real mics, and b) you're probably using a pre-amp to beef up the recording volume anyway. Also, you don't need to worry about setting depth and reverb, since the hall sound is already there.

    That's something else I've discovered. In order to replicate a big environment, your instruments have to appear to be further back in the virtual hall, and in so doing you end up having to lose some volume to achieve the effect .. otherwise you'd have an overbearing amount of wet and dry sound to worry about, which will most likely overdrive your main output and cause everything to clip.

    So I think the answer here is that you probably don't want to use too much actual compression, but by all means use an overall compressor to raise the volume so your recording comes through at an appropriate level. And use a limiter to get the rest of the gain once you can't compress any further.
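    To put rough numbers on 'compress to gain volume, then use a limiter for the rest': here is a minimal sketch of the gain arithmetic only, not a working audio processor. The threshold, ratio, and ceiling values are illustrative choices, not recommendations.

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=3.0):
    """Static curve of a downward compressor: levels below the
    threshold pass unchanged; the overshoot above it is divided
    by the ratio, so loud peaks come down."""
    if level_db <= threshold_db:
        return 0.0
    overshoot = level_db - threshold_db
    return overshoot / ratio - overshoot  # attenuation, in dB

def makeup_gain_db(peak_db, ceiling_db=-0.3):
    """Makeup gain lifts the compressed signal so its loudest
    peak sits just under the digital ceiling."""
    return ceiling_db - peak_db

# A -6 dBFS peak through a 3:1 compressor with a -18 dB threshold
# is pulled down to -14 dBFS ...
compressed_peak = -6.0 + compressor_gain_db(-6.0)
# ... leaving 13.7 dB of headroom to reclaim as makeup gain --
# which is exactly how compression 'makes things louder'.
makeup = makeup_gain_db(compressed_peak)
```

    The point the arithmetic makes: quieter material below the threshold comes up by the full makeup gain, while the loudest peaks come up less, which is why the overall level rises without clipping.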

    2). Question: "Why can't I get as loud as a commercial recording?"

    This also usually gets met with "Why the hell do you want your recording as loud as an AC/DC record!!". This annoys me a lot, because it's quite clear that the people asking this don't particularly want to be loud enough to break the monitor speakers .. but simply to be as loud as the expected industry level. Let's face it, when you put a CD in your stereo and find you have to crank that sucker right up to 9 (on a scale of 10) to hear something at a useful level .. who doesn't think 'Hmm ... this was badly mastered!'?

    Regardless of the quality, we all want to achieve a certain volume in order to appear competent at our mastering process. Of course the thing here is that the industry .. at least in pop / hip-hop recordings .. is using extreme peak-level limiting to get that louder-than-is-technically-possible loudness. They're using very expensive plugins and mix engineers with at least 20 years of secret tricks up their sleeves to achieve this kind of sound.

    Is it necessary to have this kind of loudness ... no, I don't think it is. As long as your recording reaches an expected and happy level of loudness .. and as long as your orchestral recordings remain dynamic .. then it doesn't really matter. Plugins like Ozone 3 will allow you to boost your levels to some degree, and clever use of compression, limiting and EQ will also improve your mastering level, but these are things that take a while to learn. I would say here .. don't try to compete with the industry .. just try to make your recordings work at the level you find acceptable.

    3). Question: "How do I use reverb - I just can't get my orchestra to sound natural?"

    Well, this one is well worn for me. I've been studying reverb for virtual orchestra for many years now and I still don't have any one way I like to work with it. Again, most 'pros' end up blasting the newbies for not knowing anything about this, and I can empathize a little here, as this is just one of those things that can't be answered quickly because there are too many variables to consider. Of all the recordings I've done to emulate a real orchestra, it's usually the ones with the smallest setup, figured out by listening to the sound, that end up sounding right ... rather than the ones I've tried to work out mathematically, thinking cleverly about early reflections, depth, etc.

    Sometimes just applying one reverb for the whole thing, via a bus send/return system, can sound more pleasing and realistic than using 14 different reverbs. Believe me, I've tried them all :P. With reverb it's really all down to the impulse response you decide to use. Some IRs simply sound crap. I'm fond of using the TODD-AO IRs in Altiverb, but there are times when I just hate them because they don't give me the right kind of realism. Lately I've taken to using the 'Sydney Opera House' set because it simply sounds right for what I'm doing. And no, you don't NEED Altiverb to have a good reverb .. the fact is that the built-in reverbs on most DAWs are good enough to provide a decent room sound, but I guess it's nice to be able to look through different high-end reverbs and compare for yourself what seems better.

    And it really does depend on what sample library you're using. I personally love using VSL because of its versatility. One can make VSL sound small and close, or big and far, with the right kind of reverb setup. Other libraries have a built-in sound, which is fine, but for the most part I don't think I've heard many recordings of a sample-library orchestra that don't sound fake to me. Mind you, that is mostly down to the performance of the instruments. VSL has the advantage of an amazing legato system that allows for very realistic playback .. add that to a decent reverb and depth setup and half your battle is already won.

    Anyway, this isn't meant to be a plug for VSL, but it's true that I have enjoyed much more realistic performances with VSL instruments than with other libraries.

    A question of Experience:

    I tell you, I'd give a LOT to be invited to sit beside an engineer who's recording a classical performance or a film score, because there's no end to the detail one could learn from such a visit. Seeing how they position the instruments .. how much of the close mics they use .. how much of the room mics .. how much processing they use, how much 'fader riding' they do during the mixdown .. what kinds of gear they're using. The questions are pretty much endless, and the sad fact is that the ONLY way to find out is to befriend a mix engineer who can show you these things. For most of us that simply isn't going to happen. There's no book in the world that describes a good way to mix virtual orchestra .. or indeed how a real orchestra is mixed in the setting of a classical recording, or even a film score.

    There are books and articles which cover the rudiments of mixing in general, but it's incredibly hard to find the right kind of information for the kind of music we all write here on this forum, and on others. So to the professionals I can only say 'Try to be kind to the newbies .. it's not like there's any easy way for them to find out for themselves .. if there were, they wouldn't ask what appear to be inane questions!'. And to the newbies .. well, you've probably already had a telling-off from the higher-ups, so you know not to bother them too much if you can help it. But don't stop experimenting .. perhaps the best way to learn is to throw yourself into the mix .. try everything, every plugin .. see what it does. Compare your mixes till your ears fall off, compare them with recordings you want to sound like, and just keep trying. If nothing else you'll learn quickly what everything does .. even if you've no idea what it's called. If you're unsure about technical terms like 'RMS', 'ratio', or 'limiting and compression' .. look them up on Wikipedia .. and try to learn what it all does.

    As I said at the beginning, I intended this thread to be a discussion of approaches to mixing. If you have ideas on better ways to mix then please do chime in .. I'm always interested to hear ideas and to try them out for myself. What I've tried to lay down here are some of the common questions that usually end up in an argument .. questions that, more often than not, could have been answered with the crucial points above rather than by just blasting the newbies.

    So .. How do YOU do your mixdowns? And what do you think is the proper way to do them?

    Good questions I think you'll agree :P


  • Hi,

    In my opinion, another very important newbie issue is articulations and MIDI velocities. I know it's not exactly about mixing, but I think it's relevant to what Hetoreyn is discussing, about how we create music with sample-based orchestral libraries.

    There is a profound psycho-acoustic illusion at work - an instrument may seem like it is being played at a consistent volume and style, but in reality there can be surprisingly large differences in volume and articulation between (and within) notes. This is more than just "humanizing" by introducing randomness. A professional performer has solid control over these subtleties, and often the performers themselves don't even realize they are varying the volume and attacks of their notes - they are just being musical.

    When hearing newbie compositions, the compositions themselves are often pretty great, but if the articulations sound bad, I think audiences quickly react negatively - for beginners it is at least as important as mixing problems such as loudness or reverb, if not more so.

    Newbies should not be afraid to take some extra time to try various crazy articulation changes (sometimes even a new articulation per note) and exaggerated dynamic changes that emphasize rhythm or tension in a passage =) With practice, this extra time disappears, but the knowledge gained of how "real" and "perceived" volume and articulation changes correspond to each other is a very powerful tool for conveying musicality.

    Perhaps the best example to illustrate what I'm saying is fast 16th notes. Round-robin samples help greatly, but varying MIDI velocities enhances musical phrasing. It's possible to add a "vigorous" feel by accenting the 1st and 3rd 16th notes (out of every 4) just slightly, accenting the 1st slightly more. When doing this, tension can be built by removing the accents just before an important downbeat and instead using a small crescendo in the MIDI velocities. In some cases, it might even be useful to accent the 2nd 16th note immediately after a downbeat, to overcome masking issues or to emphasize the start of a fast passage.
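    The accent scheme above is easy to prototype as plain velocity lists before committing it to a sequencer. A minimal sketch - the base velocity of 80 and the accent sizes are illustrative numbers, not a rule:

```python
def accent_16ths(n_notes, base=80, strong=12, weak=6):
    """Accent pattern described above: in every group of four 16th
    notes, push the 1st a little harder than the 3rd, and leave the
    2nd and 4th at the base velocity."""
    bumps = [strong, 0, weak, 0]
    return [base + bumps[i % 4] for i in range(n_notes)]

def crescendo_tail(velocities, base=80, length=4, step=4):
    """Remove the accents just before a downbeat and replace them
    with a small linear crescendo into it."""
    out = list(velocities)
    for j in range(length):
        out[len(out) - length + j] = base + j * step
    return out

# Two accented groups, the second flattened into a rise toward
# the next downbeat:
bar = crescendo_tail(accent_16ths(8))
# bar == [92, 80, 86, 80, 80, 84, 88, 92]
```

    The same lists can then be pasted into a DAW's velocity lane or written out through any MIDI library.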

    In another example, I found that inserting a sustained note among sforzando notes can sometimes sound like a nice tension build into the next sforzando downbeat. It's not something that might actually be found in the written score, but it still would have sounded that way when the musician played it.

    Hetoreyn, about reverb - are you suggesting that the non-convolution reverbs available in most DAWs can sound convincing enough? If you are claiming that, I would be interested to know some details about how you use them. Also, you mentioned that your "small setups" often sounded better - can you please elaborate on what those setups are? I would like to try some of those techniques if you recommend them. In my (limited) experience, I was unable to get anything to sound authentic except a "true-stereo" convolution reverb... and even then, comparing it to classical CDs I have, there is a sense of "space" that comes from realistic early reflections which even true-stereo convolution reverbs cannot achieve. I'm on the verge of getting Vienna MIR just to solve this problem... but I'd like to try what you recommend first =)

    Cheers, ~Shawn

  • last edited

    Well, I guess what I mean by a small setup is that everything runs through one reverb via bus send/return (see Christian Kardeis's templates for a good example of this), thus reducing the CPU power needed to create the virtual room. By doing this you also automatically create homogeneity amongst your instruments, since they're in the same room. Of course this only really helps if you're using just one sample library, whose instruments were recorded in the same environment.

    Of course a convolution reverb will certainly give more realism than an artificial reverb when used correctly. But recently I did a test mix in Pro Tools using only the D-Verb and found that the sound was pretty pleasant to listen to. It wasn't super realistic, but it was more than passable as a production sound. I guess what I mean to say is that you don't have to spend hundreds of euros on Altiverb if you don't have that kind of money. There are reverbs that, used properly, can produce a good sound.

    That being said, I almost always use either the Vienna Convolution Reverb or Altiverb, but that's more because I've had several years of creating templates designed around these kinds of reverbs. I have also used Platinum Verb in Logic and D-Verb in Pro Tools to good effect. Yes, I do think you need a convolution reverb to truly give you a realistic space .. but a lot can be made of a decent artificial reverb when used right.

    To answer your question on space .. yes, this is something that is just extremely difficult to achieve. I have found the best way to give your recordings space is to use a near and a far reverb, to place your instruments at a mid-far distance, and to have very little up close. In panning, make sure no instrument exceeds 50% left or right, and don't narrow the stereo spread too much ... things like flutes sound far more realistic coming out in full stereo than they do narrowed.
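    The 50% panning guideline can be expressed with a standard constant-power pan law. This is a generic sketch, not any particular DAW's panner:

```python
import math

def clamp_pan(pan, limit=0.5):
    """Keep every instrument within 50% of left or right, so it
    still bleeds into both speakers."""
    return max(-limit, min(limit, pan))

def pan_gains(pan):
    """Constant-power pan law: pan runs from -1.0 (hard left) to
    +1.0 (hard right); left/right gains always carry equal total power."""
    angle = (pan + 1.0) * math.pi / 4.0   # maps pan to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

# A flute panned "hard left" is clamped to 50%, so it still reaches
# the right speaker at a reduced but audible level.
left, right = pan_gains(clamp_pan(-1.0))
```

    With the clamp in place, even an extreme pan setting leaves a substantial signal in the opposite channel, which is what keeps the image wide and natural.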

    I've found that if you want space .. and realism .. then you need less dry sound (kinda obvious, I know), but you'd be surprised how many people choose a very dry sound, claiming it sounds more real. The problem is that it simply sounds 'close', and if your performances of those instruments are weak, then all the flaws in the performance will show up and highlight the fact that this is a computer-based performance. By setting the instruments back you help to hide the nature of that performance .. and you also have to remember that when you go to hear a symphony, you're not on stage with your ears next to every instrument .. you're sitting back at least 3-10 meters from the orchestra, hearing as much of the room tone as you are the orchestra.

    Some examples of my own mixes:

    Here's an example from my own work from 2008:

    02-Carter-vs-Klingon.m4a

    In this case I'm using the TODD-AO IRs in Altiverb. I don't say this mix is perfect .. but for me it has a good sense of realism and sounds nice and roomy. I used 6 instances of Altiverb (strings near and far, woodwinds near and far, brass near and far) plus an overall VSL reverb to achieve a little more distance. The setup is rather complicated, to say the least. I think it came out sounding nice and big, but there's a certain lack of essence in the decay of the instruments that I could never quite get right.

    Now when you compare this to a newer piece that I did:

    07_Escape_plan.m4a

    This piece only used two reverbs. The main difference is that I had learned how to better sit my instruments at their respective depths, and I'd also chosen a more pleasing IR. The setup for this piece was FAR simpler: a pre-fade bus send/return into one reverb, plus an overall reverb to provide extra space. The main reverb was the TODD-AO IR (Mono to Stereo Wide at 15m70) and the overall reverb was the VSL reverb using ORF Sendesaal MF Warm. The result is not so bad .. again, not exactly without its flaws, but a fairly passable imitation of a real orchestra.

    So my point is that the smaller mix setup turned out to be better, or at least easier to use, than the big setup, which required 7 reverbs in total .. and frankly was a pain to balance. I will be publishing my 'Osiris' template soon for all to try, but I'm still ironing out some of the bugs.

    If you really want a big sound, then turn down the 'dry' and 'near' sound and bring up the 'far' sound .. though be wary of washing out your instruments. It comes down to what kind of room you're looking to perform in. If you want the sound of an opera house, feel free to apply a touch more 'far' than 'near' sound, because you want the overall room sound. If you're trying to do a film-score-style recording, chances are you want a little less of the 'far' sound and actually want to balance the orchestra to gain maximum volume from all quarters.

    I would dearly like to try MIR, as I'm very intrigued by its possibilities. The problem is that I don't have a computer capable of using it, and the Mac version isn't due for a while yet. Sadly my old clunker of a quad G5 no longer cuts the mustard as a high-end computer :sulk:

    I think MIR will solve many of the spatial problems involved with mixing virtual orchestra. I'm also interested in the 5.1 surround options, though as yet I'm not sure any of that means much, since most of what I monitor with is still stereo. Surround sounds like a good idea, but it's only useful if you have a surround system to listen on. I don't think there's any spatial listening advantage to mixing in 5.1 if you're only going to output it all to stereo.

    I agree with what you say about articulation control. Nothing irritates me more than hearing someone use a sustain patch on a legato line for horns. Nothing sounds more fake than brass played incorrectly. You have to choose your articulations carefully .. if nothing else, just listen to them .. if they sound crap then use something else until they sound right. And velocity control is very important. It's all about creating the performance. If the performance of your virtual instruments isn't there .. then no amount of reverb or mixing will help.


  • Hi Hetoreyn,

    Thank you for sharing your insights and music. I think you have achieved your goal of realistic sound. In a recent post, I asked whether adding a mixer to MIR would add anything useful to MIR in its present form (build 1719). I am running MIR on a VisionDAW PC with 32 GB of RAM and a dual-core Xeon X5560 2.8 GHz, an RME Hammerfall 96/52 audio interface, with the samples spread over four SATA drives. This beast is driven by a Mac Pro running Logic 9 and a MIDIoverLAN setup. The PC was built by Mark Nagata at the VisionDAW company. I mention this because when I decided to use the MIR software I wanted a system that would run it without a lot of glitches. I achieved this goal with this system.

    All of my music is based on an initial improvisation. I set up MIR with the venue and microphone, and place the instruments on the stage. In Logic 9 I choose the instrument set. In the case of "009 Violin and Woodwind" (http://www.youtube.com/user/Bachbeatty) I used solo violin, solo viola, solo cello and various solo woodwinds. I set the tempo at 60 bpm and play the composition start to finish, live, from a piano keyboard. The key velocities that Shawn mentioned, variations in tempo, and other expressive factors happen in the improvisation. There is no written music or any real preplanning - sometimes a brief warmup before beginning. The planning phase is actually the setup of MIR and Logic 9.

    Once the sequence is recorded, the editing phase begins. This includes correcting mistakes, separating the composition into individual voices, adding articulation controls, expanding the instrument set based on the composition, optimizing the instruments' playing ranges, and adjusting the overall tempo. The mixing is actually in place as you play the composition in the MIR environment: the instruments' sounds reflect the venue (hall) chosen, instruments can be grouped to adjust their overall volume and character (Pure, Air, etc.), and you can insert compression for each instrument (there is no compression in example 009). In the output channel you can choose a microphone setup (which you can fiddle with), change the microphone position, and set a master EQ and a room EQ - this software does everything except cook supper! I still wonder why VSL wants to add a mixer.

    I have never thought MIDI-generated music's goal was to sound like an actual orchestra. Rather, I think of the system as a powerful and flexible sound generator capable of realizing individual creative expression. I don't score the music for an actual orchestra anymore; if anything, I would register the MIDI scores for copyright, and if someone wants to transcribe one into a score there would be enough information to do so. Three months ago I started posting finished music on YouTube as underscores to videos that I shot. There has been a modest response, but better than any website or audio blog I have tried in the past.

    Not everything you compose is worth keeping, and sometimes I get a composition all the way through the process and then apply the delete key, but that goes with the territory. Your brain, with all its musical neural connections formed over the years, is a lot smarter than the conscious "you". The act of creation is like diving headfirst into the dark off the summit of a high mountain, trusting that you can make the journey skillfully and end successfully. In other words, trust the brain and ignore the smart kid that everyone thinks is so musically gifted. A lot has changed in this field since the mid-eighties, when I started to compose music using MIDI, and it will continue to change, so don't get set in cement about anything. I don't know if this helps, but it is a snapshot of how I work.

    Regards,

    Stephen W. Beatty   


  • last edited

    @hetoreyn said:

    I would dearly like to try MIR, as I'm very intrigued by its possibilities. The problem is that I don't have a computer capable of using it, and the Mac version isn't due for a while yet. Sadly my old clunker of a quad G5 no longer cuts the mustard as a high-end computer :sulk:

    I think MIR will solve many of the spatial problems involved with mixing virtual orchestra. I'm also interested in the 5.1 surround options, though as yet I'm not sure any of that means much, since most of what I monitor with is still stereo. Surround sounds like a good idea, but it's only useful if you have a surround system to listen on. I don't think there's any spatial listening advantage to mixing in 5.1 if you're only going to output it all to stereo.

    Hi Hetoreyn,

    Firstly I must say this is a great thread. Thought you should also know that I am a big fan of your VSL podcasts and eagerly await each release!

    I am thinking of investing in a convolution reverb soon, though I'm seriously considering waiting for the Mac release of MIR Pro - the PC version sounds breathtaking, and the interface definitely provides a much more musical approach to mixing and using reverb. I am also aware that 3rd-party plug-ins are now supported in MIR, which also REALLY appeals to me. My alternatives right now are:

    1) Altiverb - clearly you have vast experience using this with the Vienna Instruments. Is it really worth forking out £400-500 for it? Bear in mind I also use ProjectSam Symphobia, Synthogy Ivory and other VIs, and so want to be able to blend all of them convincingly.

    2) Staying put with Space Designer in Logic Pro 9 - would you recommend I hold out until MIR Pro for Mac is released? Another question I have regarding reverb technique: I currently send all of my orchestral instruments to one reverb output. I am considering grouping sections (i.e. brass, strings etc.), as I am struggling to produce a sufficiently realistic sound in terms of space and depth. I am sure much of this is down to my lack of technique rather than Space Designer's limitations; nonetheless, would you recommend doing this?

    Your thoughts and the thoughts of others reading this would be highly valued.

    Cheers [:)]


    This is a really good thread, and those are very interesting points about maybe the most important overall subject in making your music sound both realistic and expressive. I have also had many problems in this area, and was using Altiverb mainly, though occasionally a Lexicon hardware reverb, in a rather complicated way. However, I just started using MIR SE on a not-that-expensive computer (i7 920 with 24 GB of RAM) and what I am noticing is how it completely changes all of these aspects of the sound, because now the original instruments are placed within an acoustical environment that is completely right for them and based upon orchestral sound instead of generic sound - unlike Altiverb, which is reverb for anything, not purely orchestral instruments. Also, Altiverb is not a mixing environment, just reverb.

    MIR has completely revolutionized my mixing concepts, since it blends all the elements together into one environment and really works. When you can place the instrument in its proper position and image in relation to the others, the other elements of sound - like levels - become much easier to deal with. An example: on a large mix I am doing now, I placed the pipe organ at the very back of the hall as opposed to the front, and this brought out the brass, which were a bit overbalanced in one section - WITH NO ADJUSTMENT OF LEVELS. In other words, the acoustical environment being right actually did the mixing adjustment automatically. This is exactly what would happen in reality if that organ were in the back (as it of course is) - the sound would be less obtrusive and therefore easier to mix with the brass. Anyway, I am noticing how the extreme realism of the overall sound in MIR profoundly affects all the elements of the mix, especially levels. Also, the levels built into the MIR instrument presets are basically the correct real-world levels, which gives a good starting point.

    Anyway, on the final level of a CD, I agree that you should be very conservative with compression when dealing with orchestral/film music. Hip-hop and other pop stuff can make everything almost one level, but orchestral has to remain extremely wide dynamically. I did a CD recently in which I basically did a kind of "manual compression": I looked at the loudest tracks and made sure they were not forcing an overall much lower level with a few big peaks that had to be normalized. Just doing that can often bring the entire level up by several decibels without any artificial compression, and it is completely undetectable, musically speaking.
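    That "manual compression" idea - scaling a whole track so its single loudest peak sits just under the ceiling - is plain peak normalization. A minimal sketch; the 0.97 ceiling is an illustrative choice:

```python
import math

def normalize_peaks(track, ceiling=0.97):
    """Scale the whole track so its single loudest sample sits just
    under the ceiling. The overall level rises, but the dynamics are
    left completely untouched -- no compression involved."""
    peak = max(abs(s) for s in track)
    if peak == 0.0:
        return list(track)
    gain = ceiling / peak
    return [s * gain for s in track]

def gain_db(old_peak, ceiling=0.97):
    """How many dB the whole track comes up after normalizing."""
    return 20.0 * math.log10(ceiling / old_peak)

# A track whose loudest peak was only 0.5 comes up by roughly
# 5.8 dB just by normalizing.
```

    Because every sample is multiplied by the same gain, the ratio between quiet and loud passages is preserved exactly, which is why the trick is musically undetectable.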


  • Hey there,

    Altiverb (AV) certainly gives the user many options, such as speaker placement, separate control over reflections, EQ, and all kinds of other fiddly things, but I'm not really sure how much of this is useful for realistic mixing, and how much of it actually gets in the way. These days I tend to use a rather minimal set of features in AV and I'm getting better .. or rather, less frustrating .. results.

    The main thing Altiverb has going for it is a huge library of impulse responses (IRs) .. however, other reverbs are catching up. VSL have their room packs for the VI reverb and for MIR, which offer many nice new room sounds. AV is an awesome reverb for sound engineers seeking very specific room sounds, like 'small bathroom' or 'cupboard' IRs .. but in my opinion it is not the only reverb that will offer acceptable sound for virtual orchestra.

    Space Designer (SD) in Logic will actually offer the same kind of quality as AV. The key is to choose the right kind of IR for SD, and there are many on the net that you can pick up for free. All one really has to do is use the reverb correctly .. meaning that if you assign your instruments to the reverb with a bus send/return configuration, you will get as much benefit from it as you would from AV. Perhaps the biggest difference is that you won't be likely to get the 'TODD-AO' or 'Paramount Stage M' IRs for Space Designer .. but that doesn't mean you won't find a good IR for SD if you look for it.
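    For the curious: at its core, every IR-based reverb (Space Designer, Altiverb, the Vienna convolution reverb) convolves the dry signal with the impulse response and blends the result back with the dry signal. A toy sketch, with the bus send level standing in for `wet_mix`:

```python
def convolve_reverb(dry, ir, wet_mix=0.3):
    """Toy convolution reverb: convolve the dry signal with an
    impulse response (IR), then blend wet and dry."""
    n = len(dry) + len(ir) - 1
    wet = [0.0] * n
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            wet[i + j] += d * h
    padded_dry = list(dry) + [0.0] * (n - len(dry))
    return [(1.0 - wet_mix) * padded_dry[k] + wet_mix * wet[k]
            for k in range(n)]

# A single click through a two-tap IR: the output carries the direct
# sound plus a quieter, delayed copy -- the "room".
out = convolve_reverb([1.0, 0.0, 0.0], [1.0, 0.5])
```

    Real plugins do the same convolution with FFTs over IRs that are seconds long, which is why the choice of IR matters so much more than the brand of the plugin.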

    I'd say it's worthwhile investing in AV if you have the money, as it will give you more choices, and that's never a bad thing. But if your budget is tight and you already have SD, then I would say SD should suffice and be able to give you a great sound. Beat Kaufmann warned me early on, when I asked him about AV, that I shouldn't expect it to solve my problems .. and he was totally right. AV seemed like a great choice to begin with .. but I've noticed as time wore on that it was really more bother than it should have been. And if I'd stuck with SD I would be 500 euros better off :P

    As for achieving homogeneity amongst your different libraries, it's difficult to say how this is best done. I use a limited number of EWQL instruments in my recordings (usually percussion) and I find that by using the middle-range mic positions and assigning a small amount of reverb to them (the same reverb that is going to the VSL instruments), they tend to sound fine. But if you're using lots of different libraries then yes, I can imagine there being depth issues. This is one reason why I prefer to use one library primarily. Again, if you use reverb in the bus send/return config, that gives you a lot of control, so you can use less reverb on the libraries that already include more 'native' reverb.

    All in all I'd say hold out for MIR, and in the meantime just try to get the best out of SD. SD is quite capable of giving you a lovely sound .. as with all things, it's just a case of learning to manage it and finding the best way to use it. MIR will most certainly give you all the control you could possibly ask for .. with much less fuss.

    As for the issue of whether to use one reverb for everything or one per section: I've tried both ways, and one offers more potential control than the other .. but I've found that both ultimately give the same sound. In Pro Tools 8 I have the sections separated out into 4 groups, mainly so I can avoid overloading the buses and have control over the section volumes .. but often I find that this approach really doesn't improve the depth or sound of the reverb. In Logic I've used just one bus reverb (and one on the main output); with the right kind of attention to the bus sends, this really gives me all the control I need to adjust depth. Really I think it all comes down to what IR you're using and how much closeness or depth you're giving each instrument.


  •  I think one thing worth noting about MIR as opposed to Altiverb, SD, etc. is that all those others are generic reverbs - they are for sound effects and dialogue in movies as much as for music. 

    However, MIR is specifically for orchestral music in a concert hall (or on a stage).  This is what is making a huge difference for me, since I do not have to do tricks to try and coax perfection out of generic reverbs and mixing the way I used to (try).  MIR automatically creates nearly perfect orchestral placement, depth, level and reverb.   Also, whereas Altiverb gives you at most about four actual sampled player positions on any given impulse set (on the Todd-AO, for example), MIR has dozens - front to back, side to side, directional.  It is a quantum leap over all the others because of the amount of sampling that went into it, as well as the interface, which is designed for musicians. 


  • Dear All,

    I agree with William on MIR - it is a milestone change to workflow, and an astonishing leap forward.  Between MIR and Lexicon PCM Native Reverbs (to add a sheen in the same manner as a mastering reverb), I've just about stopped worrying about reverb, except to decide how much I want.

    However, I just wanted to add something to this discussion about a subject that does not yet appear to have been mentioned in much depth, but which in my opinion makes probably the greatest contribution to creating a realistic sound with sampled instruments: orchestration.

    I know this thread is mainly about mastering and reverb, but writing piccolo lines that sound piccolo-y, violin lines that are violin-y and horn lines that.. well ... you know what I mean - and thinking about ensemble, impact, and what part of an instrument's range will truly sing, cut through or support the texture - are surely the most important decisions aside from composition itself.

    This is not to say that the tools (e.g. VSL) should not be used in an orchestrally counter-intuitive or experimental way (for example, see Ionisation by Varese) - very interesting effects can be achieved by programming impossible or non-idiomatic lines.

    However, if you are seeking to produce the most realistic orchestral sound using sampled instruments, before you worry about continuous controllers, velocity, reverb or anything else, take a good look at your orchestration, and listen to it.  No amount of reverb or compression will mask errors in this.

    Kind Regards,

    Nick.


  •  That's absolutely true. I've personally tried to make mistakes in orchestration sound good, and nothing could do it.   Hetoreyn does some excellent orchestration in his music though, and what he has described is what I have been concerned about also - especially since I am also doing CDs and so need to make sure the overall loudness is right.    But MIR is a tremendous step forward in the overall sound, depth and placement, and what I have noticed is that when those things are right, you can get the others right so much more easily.


  • Just to clarify - my previous message was not intended as a comment in any way specifically upon Hetoreyn's orchestration, which I happen to think is very effective.  It was a much more general comment.


  • Hi all,

    It's very helpful to hear everyone's opinions so far! I have many more thoughts to add, too:

    (1) I tried Vienna MIR, and I completely agree that it is a remarkable advance for mixing realistic acoustic spaces! However, for those that will still be using convolution reverb, here are some of the mixing techniques I personally use:

    - Obviously, stereo panning and volume levels would be tweaked appropriately.

    - Adjusting wet/dry mix. For example, strings could have *slightly* more dry mix than the brass.

    - Adjusting stereo spread. For example, I found that woodwinds sound more authentic with a very narrow, close-to-mono stereo image. Even if the image is reduced close to mono, the instruments are still panned in stereo. It is also important to note that increasing the separation or spread beyond the original audio is often done by introducing delay between the L and R channels, and that delay can interfere with the early reflections in the reverb.

    - Using EQ/filtering to simulate distance in addition to reverb. For example, horns usually sit at the back of an orchestra with their bells facing backwards - and their high frequencies get significantly damped by the time the audience hears them. Some concert halls suffer from "bass loss", which has to be manually simulated in the dry mix, too.

    - Using EQ to remove undesired tone colors. For example, the snare drum has some pitched resonance that I prefer to remove. The "bite" on the tenor and bass trombones when they play loud staccato or sforzando - I used an EQ to reduce this... it's possible a multi-band compressor could be used in that case, too, but I didn't try that.

    - Tweaking EQ and other parameters on the impulse response itself. The impulse response I used was from a boomy auditorium, but it sounded decent (I was surprised!) after I reduced the low frequencies.

    - Techniques described in my next point below
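    Two of the techniques above - narrowing the stereo image and darkening the EQ for distance - can be sketched in code. This is purely my own illustration (the function names and the width/cutoff values are invented examples, not any plugin's API). Mid/side scaling is shown because, unlike adding L/R delay, it narrows the image without the phase problems mentioned above.

```python
import math

def narrow_stereo(left, right, width=0.3):
    """Narrow a stereo image via mid/side scaling.

    width=0.0 collapses to mono, width=1.0 leaves the image unchanged.
    Unlike L/R delay tricks, this introduces no inter-channel delay
    that could interfere with the reverb's early reflections.
    """
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid, side = (l + r) / 2.0, (l - r) / 2.0
        out_l.append(mid + width * side)
        out_r.append(mid - width * side)
    return out_l, out_r

def distance_lowpass(samples, cutoff_hz=4000.0, sr=44100.0):
    """One-pole low-pass: a crude stand-in for the high-frequency
    damping that distance (e.g. rear-facing horn bells) causes."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, acc = [], 0.0
    for s in samples:
        acc = (1.0 - a) * s + a * acc
        y.append(acc)
    return y
```

    In a real session you would of course do both with your DAW's stereo-width and EQ plugins; the sketch is only meant to show what those knobs are doing under the hood.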

    (2) Three things to say about "compression" to increase the overall loudness of the audio:

    First - it seems that people on this thread are OK with careful use of compression. Many others are opposed to it because it's "evil" and "unnatural" for a pure, authentic orchestra. Personally I agree it's OK to use careful and subtle compression. One technique in particular, citing Bob Katz in his book "Mastering Audio": use a transparent compressor with a very low threshold (so that most levels are above the threshold - perhaps -30 dB) and a very low ratio (perhaps 1.1 : 1 or even less). With this setup, most of the sound is being gently compressed, but the compression curve is very smooth, and the correct relative loudness still exists at all scales. Add the fact that we are in software, where look-ahead compression with a theoretical 0 ms attack is possible, and the compression can be quite transparent while still giving the opportunity to increase overall loudness by 2-3 dB, or possibly even more. And finally, it may be worth considering parallel compression, as long as it does not introduce comb-filtering phase issues, but I don't know much more about parallel compression.
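    To make those Katz-style settings concrete, here is a small sketch (my own illustration, not taken from the book) of the static gain curve such a compressor applies - threshold -30 dB, ratio 1.1:1:

```python
def gentle_gain_db(level_db, threshold_db=-30.0, ratio=1.1):
    """Static curve of a low-threshold, low-ratio compressor.

    Everything above the threshold is pulled toward it by the ratio;
    at 1.1:1 the pull is very gentle, so relative dynamics survive.
    """
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A 0 dB peak comes out around -2.7 dB, which is exactly the 2-3 dB
# of headroom you can then recover with clean makeup gain.
for lvl in (-40.0, -20.0, -10.0, 0.0):
    print(f"{lvl:6.1f} dB in -> {gentle_gain_db(lvl):6.2f} dB out")
```

    Note how levels below the threshold pass through untouched, and even the loudest peaks are only nudged, not squashed.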

    Second - I think real recordings use more "manual gain riding" than we might initially expect. Once in a while I think I can hear it happening on classical CDs I own (I could be fooling myself, though). At any rate, I think it's perfectly acceptable, as a replacement for compression in many cases, to manually automate volume levels in the mix - that is, to create automation for the volume sliders, where you define by hand how the sliders should change over time. For example, suppose there is a soft, quiet passage followed by a sudden, powerful fortissimo. One possibility is to keep the soft passage a little bit louder, and introduce a very gradual, unnoticeable decrease in volume (say, 0.2 dB per second) before the fortissimo passage. This keeps most of the quiet passage at a comfortable volume, but also keeps the powerful dynamic contrast where it's needed.
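    That gain-riding idea can be expressed as automation points. A hypothetical sketch (the function and its parameters are mine, not any DAW feature) that generates the gradual 0.2 dB/s ramp described above:

```python
def gain_ride(duration_s, ramp_start_s, rate_db_per_s=-0.2, step_s=0.5):
    """Return (time, gain_dB) automation points: unity gain until
    ramp_start_s, then a slow, unnoticeable ramp at rate_db_per_s."""
    points, t = [], 0.0
    while t <= duration_s:
        gain = 0.0 if t < ramp_start_s else rate_db_per_s * (t - ramp_start_s)
        points.append((t, round(gain, 2)))
        t += step_s
    return points

# A 20 s quiet passage, easing down from 10 s in: by the fortissimo entry
# the fader has drifted only -2 dB, far too slowly to be heard as a fade.
automation = gain_ride(duration_s=20.0, ramp_start_s=10.0)
```

    The points would then be drawn (or imported) into the volume automation lane of the track or master bus.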

    Finally - we shouldn't dismiss the psychology of perceiving volume. The change in timbre between an instrument's loud and soft playing can drastically affect the perception of volume, even if there really isn't a change in level. Most orchestral instruments gain high-frequency content when they play louder (I mean the high-frequency content increases more than the low-mid content does - their timbre literally changes). That is what makes compression or manual gain riding acceptable in many cases. Also, I think that many times composers do not rely on performers to achieve dynamics for them; instead they orchestrate the music to milk out that extra ounce of dynamics - for example, using mellow instruments (with less high-frequency content) for soft passages contrasts more strongly with a bright sound (with more high-frequency content) in loud passages. The point is that we can rely on timbre and orchestration techniques to convey volume changes, even if we use compression or manual gain riding to reduce the actual dB range.

    What do you all think?

    ~Shawn

  • Suon, that is a great post and very good information!  I agree with it, and would add that I have found EQ essential in the following cases (and wonder if anyone else thinks this) -

    Certain low instruments such as contrabassoon, bassoon, cellos, tam-tams and the low range of the harp MUST be EQ'd, because you are hearing bass frequencies that are the result of close-miked sampling sessions and that you would NEVER hear in a real concert hall.  I have noticed this especially with the harp, cello and tam-tam recordings, which have far more bass heard up close than they do when placed into an orchestral setting. So using those without EQ sounds very weird. 

    I have noticed that with MIR, the violins on the "warm" setting do NOT have to be EQ'd down any further in the high frequencies. It is a perfect EQ for them. I always used to worry about the excessive high frequencies, which seem to occur no matter how good the samples.  But in MIR this is no longer a problem.  Also, the MIR "bite" setting is really a great EQ for bringing out a low bass instrument like the contrabassoon. 

    I agree on the use of manual envelopes, and this brings up the idea of doing things during mixing that are really mastering techniques, but are better accomplished early, while still mixing.  For example, if your levels are carefully monitored on very loud instruments like brass and percussion during the mix, you can avoid having to take the entire mix down artificially at the mastering stage.  That is similar to your point about the instrumentation, and actually very much like what a conductor does in telling the percussion, for example, not to make an ff quite so loud, etc.


  • last edited

    @hetoreyn said:

    Now when you compare this to a newer piece that I did:

    07_Escape_plan.m4a

    This piece only used two reverbs. The main difference is that I had learned how to better sit my instruments at their respective depths, and I'd also chosen a more pleasing IR. The setup for this piece was FAR simpler: a pre-fade bus send / return into one reverb, plus an overall reverb to provide extra space. The main reverb was the TODD-AO IR (Mono to Stereo Wide at 15m70) and the overall reverb was the VSL reverb using ORF Sendesaal MF Warm. The result is not so bad .. again not exactly without its flaws, but a fairly passable imitation of a real orchestra.

    So my point is that the smaller mix setup turned out to be better, or at least easier to use, than the big setup, which required 7 reverbs in total .. and frankly was a pain to balance. I will be publishing my 'Osiris' template soon for all to try, but I'm still ironing out some of the bugs.

    If you really want a big sound then turn down the 'dry sound' and the 'near sound' and bring up the 'far sound' .. though be wary of washing out your instruments. It comes down to what kind of room you're looking to perform in. If you want the sound of an opera house then feel free to apply a touch more 'far' than 'near' sound, because you want the overall sound of the room. If you're trying to do a film-score-style recording, chances are you want a little less of the 'far' sound and actually want to balance the orchestra to gain maximum volume from all quarters.

    Hi Hetoreyn,

    Thanks for your reply. I am currently in the process of setting up a new template with Vienna Ensemble (the free version) in Logic Pro. I have 4 instances of this (one each for Woodwinds, Brass, Percussion & Strings). I have set up Aux tracks for each of the 16 multi-timbral MIDI channels (with the exception of MIDI channel 1, as the main MIDI channel will be used for automation within Logic, with the other 15 Aux tracks set up for automation .. does this make sense? Is it even worth setting up Auxes for automation, or do you just adjust levels in Vienna Ensemble?)...

    Looks like I am going to try to set up 2 reverbs...one main reverb then one on the master output. Now some questions...

    1) You mentioned that you set up a pre-fade bus send / return on each of your tracks...why pre and not post? Could you explain the difference between the two, as this is a concept I have never really managed to grasp.

    2) What is the point of setting up another reverb on the master output? Won't it muddy the sound? Does it make sense to double the reverb on the master output with the same IR? I found a custom-made Todd-AO IR for Space Designer, so I will experiment with using it on both reverbs for the time being.

    3) How does one go about sitting the instruments at their appropriate depths? Has this got something to do with using pre-fade sends and adjusting the wet/dry mix using the channel faders in Logic's mixer? [:S]

    Cheers


  • Hey,

    To answer your first question: the whole pre vs. post fade thing is a bit odd to explain, but I'll try.

    Essentially, using pre-fade mode turns the fader into a wet / dry control .. the louder you have it, the more 'dry' signal there is, so things sound closer. If you have it low, things sound further away (of course you have to be running a reverb on the bus send in order to hear this depth change!). So there it is .. your channel fader only manipulates the volume of the dry (unaffected) signal.

    The volume of your reverb is of course controlled by the bus send.

    Now if you decide to use 'post' fade then your channel strip fader is again an overall volume control for the entire signal because everything is going through it. Both methods have advantages and disadvantages.

    Post fade allows easy manipulation of channel volume but if you want to create different layers of depth then you'll probably have to use more than one reverb.

    Pre-fade allows greater depth to be achieved easily with just one reverb for the entire session. However, if you want to change an instrument's volume, you have to manipulate that separately. Personally I find that I use expression and velocity control so much that this is never a problem for me.

    I guess it comes down to which method suits you personally best. Try both and see which one you find easier to work with .. and most importantly .. which one sounds best for you.
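    If it helps to see the pre/post difference as signal flow, here is a toy sketch (my own illustration; "toy_reverb" just stands in for whatever reverb sits on the bus, and all the levels are made-up linear gains):

```python
def channel_out(dry, reverb, send, fader, pre_fade=True):
    """Return (dry_out, wet_out) for one channel.

    pre_fade:  the send taps the signal BEFORE the fader, so the fader
               scales only the dry portion - i.e. it becomes a depth control.
    post_fade: the send taps AFTER the fader, so the fader scales dry
               and wet together - an overall volume control.
    """
    if pre_fade:
        return dry * fader, reverb(dry * send)
    return dry * fader, reverb(dry * fader * send)

toy_reverb = lambda x: 0.5 * x  # stand-in for the bus reverb

# Pulling the fader down to 0.3 with pre-fade leaves the wet level at 0.4,
# so the instrument recedes into the room; post-fade, the wet level drops
# too (~0.12), so the instrument simply gets quieter at the same depth.
print(channel_out(1.0, toy_reverb, send=0.8, fader=0.3, pre_fade=True))
print(channel_out(1.0, toy_reverb, send=0.8, fader=0.3, pre_fade=False))
```

    In other words, with pre-fade the wet/dry ratio changes as you move the fader, which is exactly why it works as a depth control.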

    The reverb on the master is my preference for an added sense of depth. It's not necessary but I do find that just the one reverb can still end up sounding a little flat sometimes.

    For a practical demo of the pre-fade method, go to the VSL Special Edition tutorial vids and check out Christian Kardeis' and Paul Steinbauer's vids on mixing with the VSL SE .. they explain the routing and everything you need to know .. there are also templates you can download. 

    I must admit that I'm looking forward to using MIR in the near future, as it seems it will settle many little annoying problems with reverb and compression.


  • Hey, a composer friend sent me P's post about "seeking better ways to mix and to achieve a realistic recording from virtual instruments." Things are a bit slow, so here goes.

    I'll try to stick to the synths (samples or virtual sound sources). Compression and loudness are easy to hear but difficult to adjust because there are so many options. First, remember that compression is multiplicative: if you compress in your mix at 4:1 and apply more compression in mastering at, say, 6:1, your signal will sit at 24:1, not 10:1. OK for "Twisted Sister", but kinda rough for Carmina Burana.
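    To see why those ratios multiply rather than add, here is a minimal sketch (my own illustration) of two cascaded hard-knee compressors, simplified to share a single -24 dB threshold:

```python
def compress_db(level_db, threshold_db, ratio):
    """Static hard-knee compression curve: input dB -> output dB.
    Levels above the threshold are divided down by the ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A 0 dB peak is 24 dB over the threshold.
stage1 = compress_db(0.0, -24.0, 4.0)     # 4:1 in the mix -> -18.0 dB
stage2 = compress_db(stage1, -24.0, 6.0)  # 6:1 in mastering -> -23.0 dB

# 24 dB over the threshold in, 1 dB over out: an effective 24:1 ratio.
print(stage1, stage2)
```

    Real compressors have different thresholds, knees and time constants, but the multiplication of ratios holds for the overlapping range either way.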

    Master for the delivery format. Sounding good in somebody's car is very different from blending into a film or supporting a video game. Learn from my friends Fletcher and Munson and their equal-loudness curves. I've used Ozone 3 and 4 and many of the "...izer, ...ator" family of do-it-yourself mastering tools; a lot can be achieved using only a little multi-band compression and EQ, but there's no one-stop setting, and again, all those settings. It is supposed to be about the music.

    If you've got the gig, make it sound right for the gig, and juice it up for the demo CD/reel later. If it really matters, get it mastered by someone who does that for a living. It's an art! There are some books which explain the art of mixing, many of which are a bit esoteric, but the physics of what goes on in the audio path, though permanently altered by digital storage, are basic.

    I'm fond of a book called "Mixing With Your Mind". Lots of rock and roll experience, many practical tips, and an excellent section on setting a compressor. Samples are only as good as the musician, the instrument and the recording.

    I'm not a programmer and I don't spend time searching out the libraries. Everyone I've worked for collects and mixes samples like an alchemist. Next, they have learned to write for the ensemble. Sampled or live, orchestral, ethnic, whatever - choose lines which the instrument can play well.

    As far as reverb goes, be flexible. If the samples you use already sound like they're at the back of the room or hall, adding reverb won't necessarily improve the sound. Reverb is the glue. Far-away things don't (naturally) sound loud; drier/closer sounds seem louder or bigger. Tempo has a huge effect on reverb - shorter decay times are less muddy at faster tempos.

    Dry orchestral samples can be made to sound better by using a short or long version of the same reverb, depending on the instrument and the desired placement. D-Verb, Altiverb, Lexicon, plugins .. all cool, if your computer can handle it.

    Good quality hardware reverbs (best!!) have the same processing power as an 8 Quad Mac. If it's important and your computer is maxed out... rent.

    Improvements are incremental, but it starts with the ensemble palette and the construction of the music. Don't suffer alone. Hire somebody to help you set up your studio or your mix palette. Sure, as a mixer I'd love to spend a week on your score and suck up your whole budget. P talks about sitting next to a scoring engineer and learning. If you spent x$$$$ going to school to learn more about writing music, what's to stop you from getting help with your mix? Get somebody in for half a day and change your sound. Mixers need and get gigs based on their relationships with composers. Helping out the young guys/girls is in everyone's interest. -

    JV


  • last edited

    I can't give you strong advice about mastering, but if you have some questions about sampling you can ask me.

    I have been making ambient music for a long time and can share some sample packages which I sometimes use in my projects.

    1.https://www.lucidsamples.com/ambient-samples-packs/269-era-of-space-sounds.html

    2.https://www.lucidsamples.com/sound-effects-packs/246-frose-synth-spaces-vol-1.html

     😴