Vienna Symphonic Library Forum
Forum Statistics

194,483 users have contributed to 42,922 threads and 257,973 posts.

In the past 24 hours, we have 2 new thread(s), 9 new post(s) and 78 new user(s).

  •  I think one thing worth noting about MIR as opposed to Altiverb, SD, etc. is that all those others are generic reverbs - they are for sound effects and dialogue in movies as much as for music. 

    However, MIR is specifically for orchestral music in a concert hall (or on a stage).  This is what makes a huge difference for me, since I no longer have to resort to tricks to coax perfection out of generic reverbs and mixing the way I used to (try to).  MIR automatically creates nearly perfect orchestral placement, depth, level and reverb.  Also, whereas Altiverb gives you at most about four actual sampled player positions on any given impulse set (on the Todd AO), MIR has dozens: front to back, side to side, directional.  It is a quantum leap over all the others because of the amount of sampling that went into it, as well as the interface, which is designed for musicians.


  • Dear All,

    I agree with William on MIR - it is a milestone change to workflow, and an astonishing leap forward.  Between MIR and Lexicon PCM Native Reverbs (to add a sheen in the same manner as a mastering reverb), I've just about stopped worrying about reverb, except to decide how much I want.

    However, I just wanted to add something to this discussion about a subject that does not yet appear to have been mentioned in much depth, but which in my opinion probably makes the greatest contribution to creating a realistic sound with sampled instruments: orchestration.

    I know this thread is mainly about mastering and reverb, but writing piccolo lines that sound piccolo-y, violin lines that are violin-y and horn lines that... well... you know what I mean, and thinking about ensemble, impact, and what part of an instrument's range will truly sing, cut through or support the texture, are surely the most important decisions aside from composition.

    This is not to say that the tools (e.g. VSL) should not be used in an orchestrally counter-intuitive or experimental way (for example, see Ionisation by Varese) - very interesting effects can be achieved by programming impossible or non-idiomatic lines.

    However, if you are seeking to produce the most realistic orchestral sound using sampled instruments, before you worry about continuous controllers, velocity, reverb or anything else, take a good look at your orchestration, and listen to it.  No amount of reverb or compression will mask errors in this.

    Kind Regards,

    Nick.


  •  That's absolutely true, and I've personally experienced trying to make mistakes in orchestration sound good - nothing could do it.  Hetoreyn does some excellent orchestration in his music, though, and what he has described is what I have been concerned about also, especially since I am also doing CDs and so need to make sure the overall loudness is right.  But MIR is a tremendous step forward in the overall sound, depth and placement, and what I have noticed is that when those things are right, you can get the others right so much more easily.


  • Just to clarify - my previous message was not intended as a comment in any way specifically upon Hetoreyn's orchestration, which I happen to think is very effective.  It was a much more general comment.


  • Hi all,

    It's very helpful to hear everyone's opinions so far! I have many more thoughts to add, too:

    (1) I tried Vienna MIR, and I completely agree that it is a remarkable advance for mixing realistic acoustic spaces! However, for those who will still be using convolution reverb, here are some of the mixing techniques I personally use:

    - Obviously, stereo panning and volume levels would be tweaked appropriately.

    - Adjusting the wet/dry mix. For example, strings could have *slightly* more dry mix than the brass. (There is a small sketch after this list illustrating this point and the next two.)

    - Adjusting stereo spread. For example, I found that woodwinds sound more authentic with a very narrow, close-to-mono stereo image. Even if the image is reduced close to mono, the instruments are still panned in stereo. It is also important to note that increasing the separation or spread beyond the original audio is often done by introducing delay between the L and R channels, and that delay can interfere with the early reflections in the reverb.

    - Using EQ/filtering to simulate distance in addition to reverb. For example, horns are usually at the back of an orchestra with their bells facing backwards - their high frequencies get significantly damped by the time the audience hears them. Some concert halls suffer from "bass loss", which has to be manually simulated in the dry mix, too.

    - Using EQ to remove undesired tone colors. For example, the snare drum has some pitched resonance that I prefer to remove. The "bite" on the tenor and bass trombone when they play loud staccato or sforzando - I used an EQ to reduce this... it's possible a multi-band compressor could be used in that case, too, but I didn't try that.

    - Tweaking EQ and other parameters on the impulse response itself. The impulse response I used was a boomy auditorium. But it sounded decent (I was surprised!) after reducing the low frequencies.

    - Techniques described in my next point below
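
    Here is a minimal Python sketch of the wet/dry, stereo-spread and distance-EQ points above (my own illustration, nothing from MIR or Altiverb), assuming numpy/scipy, a dry stereo stem `dry` with sample rate `sr`, and a stereo impulse response `ir` already loaded as float arrays. All names and default values are placeholders, not recommendations:

        # Minimal sketch (my illustration, not from the thread): mid/side width
        # reduction, a gentle low-pass to mimic distance, then a wet/dry blend
        # with a convolution reverb.  Assumes `dry` is an (N, 2) float array,
        # `ir` is an (M, 2) impulse response, and `sr` is the sample rate.
        import numpy as np
        from scipy.signal import butter, lfilter, fftconvolve

        def narrow_width(stereo, width=0.3):
            # Mid/side scaling: width = 0 is mono, width = 1 leaves the image unchanged.
            mid = 0.5 * (stereo[:, 0] + stereo[:, 1])
            side = 0.5 * (stereo[:, 0] - stereo[:, 1]) * width
            return np.stack([mid + side, mid - side], axis=1)

        def distance_lowpass(stereo, sr, cutoff_hz=6000.0):
            # Roll off highs to suggest distance (e.g. horns at the back of the stage).
            b, a = butter(2, cutoff_hz / (sr / 2), btype="low")
            return lfilter(b, a, stereo, axis=0)

        def wet_dry_mix(dry, ir, wet=0.35):
            # Convolve with the IR and blend; strings might take a touch less wet than brass.
            wet_sig = np.stack(
                [fftconvolve(dry[:, ch], ir[:, ch])[: len(dry)] for ch in (0, 1)], axis=1)
            return (1.0 - wet) * dry + wet * wet_sig

        # Example chain for a woodwind stem (values purely illustrative):
        # processed = wet_dry_mix(distance_lowpass(narrow_width(dry, 0.2), sr), ir, wet=0.4)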

    (2) Three things to say about "compression" to increase the overall loudness of the audio:

    First - It seems that people on this thread are OK with careful use of compression. Many others are opposed to it because it's "evil" and "unnatural" for a pure, authentic orchestra. Personally I agree it's OK to use careful and subtle compression. One technique in particular, citing Bob Katz in his book "Mastering Audio": use a transparent compressor with a very low threshold (so that most levels are above the threshold, perhaps -30 dB) and a very low ratio (perhaps 1.1 : 1 or even less). With this setup, most of the sound is being gently compressed, but the compression curve is very smooth, and the correct relative loudness still exists at all scales. Add the fact that we are in software, where look-ahead compression with a theoretically 0 ms attack is possible, and the compression can be quite transparent, yet still give the opportunity to increase overall loudness by 2-3 dB, or possibly even more. And finally, it may be worth considering parallel compression as long as it does not introduce comb-filtering phase issues, but I don't know much more about parallel compression.
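
    To make that idea concrete, here is a minimal sketch of the "low threshold, very low ratio" static curve in Python (my own simplification, assuming numpy; no attack/release smoothing or look-ahead, which a real compressor would add, and the names and defaults are purely illustrative):

        import numpy as np

        def gentle_compress(x, threshold_db=-30.0, ratio=1.1, makeup_db=2.0, eps=1e-12):
            # Static gain curve only: everything above the low threshold is pulled
            # toward it along a very shallow slope, then lifted by makeup gain.
            level_db = 20.0 * np.log10(np.abs(x) + eps)        # instantaneous level
            over = np.maximum(level_db - threshold_db, 0.0)    # dB above threshold
            gain_db = -over * (1.0 - 1.0 / ratio) + makeup_db  # shallow slope at 1.1:1
            return x * 10.0 ** (gain_db / 20.0)

        # At 1.1:1, a peak 20 dB above the threshold is pulled down by only ~1.8 dB,
        # so relative loudness is preserved while the makeup gain lifts the whole mix.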

    Second - I think real recordings involve more "manual gain riding" than we might initially expect. Once in a while I think I can hear it happening on classical CDs I own (I could be fooling myself, though). At any rate, I think it's perfectly acceptable, as a replacement for compression in many cases, to manually automate volume levels in the mix. "Manually automate" means that you create automation for the volume sliders, where you manually define how the sliders should change over time. For example, suppose there is a soft, quiet passage followed by a sudden powerful fortissimo. One possibility is to keep the soft passage a little bit louder, and introduce a very gradual, unnoticeable decrease in volume (say, 0.2 dB per second) before the fortissimo passage. This keeps most of the quiet passage at a comfortable volume, but also keeps the powerful dynamic contrast when it's needed.
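
    As a concrete (hypothetical) example of that ramp, here is roughly what the automation works out to if you render it offline with numpy; in practice you would of course just draw this on the DAW's volume automation lane:

        import numpy as np

        def pre_ff_ramp(audio, sr, ramp_start_s, ff_start_s, db_per_s=-0.2):
            # Start the quiet passage slightly boosted, then fade the boost away at
            # ~0.2 dB/s so the level is back to nominal exactly when the ff arrives.
            # Assumes `audio` is an (N, 2) float array.
            boost_db = -db_per_s * (ff_start_s - ramp_start_s)  # e.g. 0.2 dB/s * 15 s = 3 dB
            t = np.arange(len(audio)) / sr
            gain_db = np.full(len(audio), boost_db)
            in_ramp = (t >= ramp_start_s) & (t < ff_start_s)
            gain_db[in_ramp] = boost_db + db_per_s * (t[in_ramp] - ramp_start_s)
            gain_db[t >= ff_start_s] = 0.0                      # nominal level at the fortissimo
            return audio * (10.0 ** (gain_db / 20.0))[:, None]

        # e.g. a 15-second ramp keeps the quiet passage ~3 dB more comfortable to hear
        # while the fortissimo still lands with its full dynamic contrast.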

    Finally - we shouldn't dismiss the psychology of perceiving volume. The timbre difference between loud and soft playing on an instrument can drastically affect the perception of volume even if there really isn't a change in level. Most orchestral instruments have an increase of high frequency content when they play louder (I mean the high frequency content increases more than the low-mid content does; their timbre literally changes). That is what makes compression or manual gain riding acceptable in many cases. Also, I think that many times, composers do not rely on performers to achieve dynamics for them; instead they orchestrate the music to milk out that extra ounce of dynamics - for example, using mellow instruments for soft passages (with less high frequency content) contrasts more strongly with a bright sound in loud passages (with more high frequency content). The point is that we can rely on timbre and orchestration techniques to convey volume changes, even if we use compression or manual gain riding to reduce the actual dB range.

    What do you all think?

    ~Shawn

  • Suon, that is a great post and very good information!  I agree with that, and would add that specifically I have found EQ is essential in the following cases (and wonder if anyone else thinks this) -

    Certain low instruments such as contrabassoon, bassoon, cellos, tam-tams and the low range of the harp MUST be EQ'd, because you are hearing bass frequencies that are the result of close-miked sampling sessions and that you would NEVER hear in a real concert hall.  I have noticed this especially with the harp, cello and tam-tam recordings, which have far more bass when heard up close than they do when placed into an orchestral setting. So using those without EQ sounds very weird.

    I have noticed that with MIR the violins when using the "warm" setting do NOT have to be EQ'd down any more in high frequency. It is a perfect EQ for them. I always used to worry about the excessive high freq which seems to occur no matter how good the samples.  But in MIR this is no longer a problem.  Also, the MIR setting of "bite" is really a great EQ for bringing out a low bass like the contrabassoon. 

    I agree on the use of manual envelopes, and this brings up the concept of doing various things in the mixing stage that are actually mastering techniques, but are better accomplished early, while still mixing.  For example, if your levels are carefully monitored on very loud instruments like brass/percussion during the mix, you can avoid having to take the entire mix down artificially at the time of mastering.  That is similar to your point about the instrumentation, and actually very much like what a conductor would do in telling the percussion, for example, not to make an ff quite so loud, etc.



    @hetoreyn said:

    Now when you compare this to a newer piece that I did:

    07_Escape_plan.m4a

    This piece only used two reverbs. The main difference is that I had learned how to better sit my instruments at their respective depths, and I also chose to use a more pleasing IR. The setup for this piece was FAR simpler, using a pre-fade bus send / return into one reverb, plus an overall reverb to provide extra space. The main reverb was the TODD-AO IR (Mono to Stereo Wide at 15m70) and the overall reverb was the VSL reverb using ORF Sendensaal MF Warm. The result is not so bad .. again not exactly without its flaws, but a fairly passable imitation of a real orchestra.

    So my point is that the smaller mix setup I used turned out to be better, or at least easier to use, than the big setup which required 7 total reverbs .. and frankly was a pain to balance. I will be publishing my 'Osiris' template soon for all to try but I'm still ironing out some of the bugs.

    If you really want a big sound then turn down the 'dry sound' and the 'near sound' and bring up the 'far sound' .. though be wary of washing out your instruments. It comes down to what kind of room you're looking at performing in. If you want the sound of an opera house then you can feel free to apply a touch more 'far' than 'near' sound, because you want the overall sound. If you're trying to do a film score style recording, chances are that you want a little less of the 'far' sound and want to actually balance the orchestra to gain maximum volume from all quarters.

    Hi Hetoreyn,

    Thanks for your reply. I am currently in the process of setting up a new template with Vienna Ensemble (the free version) in Logic Pro. I have 4 instances of this (one each for Woodwinds, Brass, Percussion & Strings). I have set up Aux tracks for each of the 16 MIDI multi-timbral channels, with the exception of MIDI channel 1, as the main MIDI channel will be used for automation within Logic, with the other 15 Aux tracks set up for automation... does this make sense? Is it even worth setting up Auxes for automation, or do you just adjust levels in Vienna Ensemble?

    Looks like I am going to try to set up 2 reverbs...one main reverb then one on the master output. Now some questions...

    1) You mentioned that you set up a pre-fade bus send / return on each of your tracks... why pre and not post? Could you explain the difference between the two, as this is a concept I have never really managed to grasp?

    2) What is the point in setting up another reverb on the master output? Won't it muddy the sound? Does it make sense to double the reverb on the master output with the same IR? I found a custom-made Todd AO IR for Space Designer, so I will experiment using this on both reverbs for the time being.

    3) How does one go about sitting the instruments in their appropriate depths? Has this got something to do with using pre-fade sends and adjusting the wet/dry mix using the channel faders in Logic's mixer? [:S]

    Cheers


  • Hey,

    To answer your first question: the whole thing of pre or post fade is a bit odd to explain, but I'll try:

    Essentially using the pre fade mode turns the fader into a wet / dry controller .. where the louder you have it .. the more 'dry' signal there is .. so things sound closer. If you have it low then things sound further away (Of course you have to be running a reverb on the bus send in order to hear this depth change!). So there it is .. your channel fader only manipulates the volume of the dry (unaffected) signal.

    The volume of your reverb is of course controlled by the bus send.

    Now if you decide to use 'post' fade then your channel strip fader is again an overall volume control for the entire signal because everything is going through it. Both methods have advantages and disadvantages.

    Post fade allows easy manipulation of channel volume but if you want to create different layers of depth then you'll probably have to use more than one reverb.

    Pre fade allows greater amounts of depth to be achieved easily with just one reverb for the entire session. However if you want to change instrument volume then you have to manipulate that separately. Personally I find that I use the expression control and Velocity control so much that this is never a problem for me.
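
    If it helps, here's a tiny pseudo-routing sketch in Python of the difference (hypothetical names, not Logic's actual routing; `fader` and `send` are linear gains and `reverb` stands in for the bus reverb):

        def post_fade_channel(dry, fader, send, reverb):
            out_dry = dry * fader
            out_wet = reverb(dry * fader * send)   # wet level follows the fader, so the
            return out_dry + out_wet               # fader is an overall volume control

        def pre_fade_channel(dry, fader, send, reverb):
            out_dry = dry * fader                  # fader only scales the dry signal...
            out_wet = reverb(dry * send)           # ...the wet level stays constant,
            return out_dry + out_wet               # so pulling the fader down pushes the
                                                   # instrument further back in the hall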

    I guess it comes down to which method suits you personally best. Try both and see which one you find easier to work with .. and most importantly .. which one sounds best for you.

    The reverb on the master is my preference for an added sense of depth. It's not necessary but I do find that just the one reverb can still end up sounding a little flat sometimes.

    For a practical demo on using the pre-fade method .. go to the VSL Special Edition tutorial vids .. and check out Christian Kardeis' and Paul Steinbauer's vids on mixing with the VSL SE .. this will explain the routing and everything you need to know .. there are also templates you can download.

    I must admit that I'm looking forward to using MIR in the near future, as it seems this will surely settle many little annoying problems with reverb and compression.


  • Hey, A composer friend sent me P's "seeking better ways to mix and to achieve a realistic recording from virtual instruments." Things are a bit slow so here goes.

    I'll try to stick to the synths (samples or virtual sound sources). Compression and loudness are easy to hear but difficult to adjust, because there are so many options. First, remember that compression is multiplicative (not sure that's the right word): if you compress in your mix at 4 to 1 and apply more compression in mastering at, say, 6 to 1, your signal will sit at 24 to 1, not 10 to 1. OK for "Twisted Sister", but kinda rough for Carmina Burana.
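
    A quick arithmetic check of that point (assuming the signal sits above both thresholds, so both slopes are fully engaged):

        # Each ratio divides the dB excursion above its threshold, so cascaded
        # ratios compound multiplicatively rather than adding.
        mix_ratio, master_ratio = 4.0, 6.0
        input_swing_db = 24.0                      # dB above both thresholds
        after_mix = input_swing_db / mix_ratio     # 6 dB left after the mix compressor
        after_master = after_mix / master_ratio    # 1 dB left after mastering
        print(after_master == input_swing_db / (mix_ratio * master_ratio))  # True: 24:1 overall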

    Master for the delivery format. Sounding good in somebody's car is very different from blending into a film or supporting a video game. Learn from my friends Fletcher and Munson and their loudness curves. I've used Ozone 3 and 4 and many of the "...izer", "...ator" family of do-it-yourself mastering tools; a lot can be achieved using only a little multi-band compression and EQ, but there's no one-stop setting, and again, all those settings. It is supposed to be about the music.

    If you've got the gig, make it sound right for the gig, and juice it up for the demo CD/reel later. If it really matters, get it mastered by someone who does that for a living. It's an art! There are some books which explain the art of mixing, many of which are a bit esoteric, but the physics of what goes on in the audio path, though permanently altered by digital storage, are basic.

    I'm fond of a book called "Mixing With Your Mind". Lots of Rock and Roll experience, many practical tips. There's an excellent section on setting a compressor. Samples are only as good as the musician, the instrument and the recording.

    I'm not a programmer and I don't spend time searching out the libraries. Everyone that I've worked for collects and mixes samples like an alchemist. Next, they have learned to write for the ensemble. Sampled or live, whether it's orchestral, ethnic, whatever, choose lines which the instrument can play well.

    As far as reverb goes, be flexible. If the samples you use sound like they're in the back of the room or hall, adding reverb won't necessarily improve the sound. Reverb is the glue. Far away things don't (naturally) sound loud. Dryer/closer sounds seem louder or bigger. Tempo has a huge effect on reverb: shorter decay times are less muddy at faster tempos.
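
    A rough starting point I've seen used (my own rule of thumb, not gospel): set the decay somewhere near the length of a bar at the piece's tempo, then trim it until fast passages stay clear.

        def decay_time_hint(bpm, beats_per_bar=4):
            # One bar at the piece's tempo, as a first guess for the reverb tail.
            return beats_per_bar * 60.0 / bpm

        print(decay_time_hint(120))   # 2.0 s for 4/4 at 120 bpm
        print(decay_time_hint(160))   # 1.5 s -> a shorter tail stays cleaner at faster tempos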

    Dry orchestral samples can be made to sound better by using a short or long version of the same reverb, depending on the instrument and the desired placement. D-Verb, Altiverb, Lexicon, plugins... all cool, if your computer can handle it.

    Good quality hardware reverbs (best!!) have the same processing power as an 8 Quad Mac. If it's important and your computer is maxed out... rent.

    Improvements are incremental, but it starts with the ensemble palette and the construction of the music. Don't suffer alone. Hire somebody to help you set up your studio or your mix palette. Sure, as a mixer I'd love to spend a week on your score and suck up your whole budget. P talks about sitting next to a scoring engineer and learning. If you spent x$$$$ going to school to learn more about writing music, what's to stop you from getting help with your mix? Get somebody in for half a day and change your sound. Mixers need and get gigs based on their relationships with composers. Helping out the young guys/girls is in everyone's interest. -

    JV



    I can't give you strong advice about mastering, but if you have some questions about sampling you can ask me.

    I have been making ambient music for a long time and can share some sample packages which I sometimes use in my projects.

    1.https://www.lucidsamples.com/ambient-samples-packs/269-era-of-space-sounds.html

    2.https://www.lucidsamples.com/sound-effects-packs/246-frose-synth-spaces-vol-1.html

     😴