Vienna Symphonic Library Forum

  • Crash course in convolution reverbs

    Hi,

    I've just spent three hours trying to get a better result with REVerence (Dutch concert house IR) than I get with a normal algorithmic reverb (RoomWorks).

    Could anyone explain how you proceed when using several reverbs for different instrument groups and then mixing them together? Do you use a reverb on the combined result as well?

    And if someone would like to shed some light on ER tail cut and ER mix, that would be great!

    Thanks in advance

    Fred


  • Anyone? There's no right or wrong here; I could just really use some input on this :)

    Thanks

    Fred


  • I don't know anything about MIR or Altiverb, but generally the way you use convolution reverbs ("converbs") with orchestral sample libraries is to create space in your orchestrations.  The problem with algorithmic reverbs is that everything sounds as if it were the same distance from the listener, which doesn't match a real orchestral seating arrangement.  With converbs, the more reverb (wet) signal you add to an instrument or instrument group, the farther away it sounds.  This gives your piece a three-dimensional quality and thus makes it more realistic (a short sketch of this idea follows at the end of this post).


    There are some exceptions to this.  For example, you wouldn't want to drown the timpani, percussion, tubular bells, piano, etc. in reverb just because they sit farthest back in the orchestra; they would sound too muddy and muffled.  In my own experience, strings tend to sound better placed further back than the brass, even though they are seated the other way around, but this is subjective.


    Watch the demo about mixing for the Special Edition.  I don't have the link off the top of my head.

    Reverb is not an exact science so you kind of have to cut your own path and trust your ears.  If it sounds too dry then it probably is.


    Good luck
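
    In code terms, the distance trick above is just a wet/dry balance around a convolution. Below is a minimal Python/NumPy sketch of the idea, assuming a mono dry track and a mono IR; the distance-to-wet mapping is purely illustrative and not taken from any particular plugin.

        import numpy as np
        from scipy.signal import fftconvolve

        def place_instrument(dry, ir, distance, max_distance=20.0):
            # Convolution reverb: the wet signal is the dry track
            # convolved with the hall's impulse response.
            wet = fftconvolve(dry, ir)[: len(dry)]
            # 0.0 = front of the stage, 1.0 = as far back as it gets.
            ratio = min(distance / max_distance, 1.0)
            # More wet (and slightly less dry) pushes the instrument back.
            return (1.0 - 0.5 * ratio) * dry + ratio * wet

        # Toy usage: noise standing in for a track, a decaying noise
        # burst standing in for a hall IR.
        fs = 44100
        track = np.random.randn(fs)
        ir = np.random.randn(fs // 2) * np.exp(-np.linspace(0, 8, fs // 2))
        front = place_instrument(track, ir, distance=3.0)
        back = place_instrument(track, ir, distance=15.0)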


  • I suggest you go to Beat Kaufmann's website and purchase his tutorial.  The second part of it (mixing) teaches you exactly how to do that.  He is also extremely helpful with personal questions, and he is writing a new tutorial specifically for use with VSL products, which comes out soon.

    Do some searches on this forum as well, as this topic has been discussed and examples have been given.  If you plan on using MIR it will be a different method, but from what I hear it is overall simpler once you have your templates created.  I myself have zero personal experience with MIR.

    Maestro2be


  • Positioning (rather) dry samples on a virtual stage can be done with early reflections and EQ-ing.

    I have started researching a concept for a VST plugin that can handle this as an insert on individual input channels receiving single instruments (e.g. Tuba) or instrument groups (French Horns, V1, etc.).

    It will take into account:

    - position on the stage (using stereo panning of the full width input to match the rest of the processing)

    - distance to the listener (virtual mics)

    - the typical "radiation" of specific instruments (think FHs backward, Tuba upward, brass forward, etc)

    - size of the stage, warmth of the venue, height and depth

    It will be (if ever released! ;-) ) based on several impulse responses per instrument that are combined at runtime. The IRs are not based on real stages but are 3D models (of early reflections only) taking into account the aforementioned parameters. The IRs are not open for editing or loading, but built into the plugin as presets.

    This project is currently in a proof-of-concept phase, so it is not yet even vapourware :D A rough sketch of the runtime IR blending is below.
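
    To make the runtime combination concrete, here is a rough Python/NumPy sketch of the blending step; the function names, the three-IR setup and the weight model are all hypothetical stand-ins for whatever the plugin would actually do.

        import numpy as np
        from scipy.signal import fftconvolve

        def blend_irs(irs, weights):
            # Weighted sum of equal-length, ER-only impulse responses,
            # normalised so the overall gain stays stable.
            weights = np.asarray(weights, dtype=float)
            weights /= weights.sum()
            return sum(w * ir for w, ir in zip(weights, irs))

        def position_source(dry, irs, pan, distance, radiation):
            # Illustrative weighting: one IR modelled for a close source,
            # one for a distant one, one for an off-axis radiation pattern.
            er_ir = blend_irs(irs, [1.0 - distance, distance, radiation])
            wet = fftconvolve(dry, er_ir)[: len(dry)]
            # Crude stereo panning of the full-width result to the stage
            # position (0.0 = hard left, 1.0 = hard right).
            return np.stack([wet * (1.0 - pan), wet * pan])

        # Toy usage: three 50 ms pseudo-ER patterns at 44.1 kHz.
        fs = 44100
        irs = [np.random.randn(fs // 20) * np.exp(-np.linspace(0, 6, fs // 20))
               for _ in range(3)]
        horns = np.random.randn(fs)
        out = position_source(horns, irs, pan=0.35, distance=0.7, radiation=0.8)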


  • Thanks for the replies, everyone :)

    When you say only ER (early reflections), how do you achieve that?

    I am using REVerence, btw. Thanks for your replies.