Vienna Symphonic Library Forum

  • Some thoughts on mixing string ensemble using MIR or Convolution reverb

    MIR is coming out now. I'm very pleased with such an easy-to-use environmental reverb, but there is still a flaw in it, one that could already be heard long ago when we were using GigaStudio and VSL. If you listen to the MIR concept mix "Pictures at an Exhibition", you may notice that when the violin ensemble plays high, long notes, the sound is thin, as if a solo violin or a much smaller ensemble were playing. A real recording of these passages would be lush, enough to tell you that they are played by an _ensemble_.

    Back in 2004, I noticed that no matter which convolution reverb I used, when I played high, long notes with VSL's 14-player violin ensemble, the sound was thin and did not sound like 14 players. But when I disabled the reverb and listened only to the dry ensemble sound, it sounded good.

    I realized it was not the sample itself that caused this weird problem. So I looked into it, but with no luck. When MIR was announced, I really hoped it might be the solution. But after hearing some of its examples, I have to say I was disappointed.

    So I took another look at the issue. Here are some results. They may or may not be helpful, but I think the reasoning is sound.

    Samples are stereo waveforms. We know that with a stereo system, sound can be played back in an environment, and stereo is enough to recreate the original soundscape, even if the original sound was created by multiple sources (like a string ensemble). But there is a problem when applying convolution reverb to a stereo signal that contains multiple sources. Think about the difference between an ensemble placed on stage and a stereo signal processed by one stereo IR convolution. When an ensemble is on stage, every single player is effectively convolved by the environment with a different IR. Each player's position makes each sound slightly different, and this effect may be the key to the lush sound of an ensemble in a hall. A stereo sample processed by one stereo IR convolution is not the same thing. I am sure that if you placed two speakers on a hall's stage, in the position of an ensemble, and played back dry samples, it would sound different from a real ensemble playing on that stage. In other words, a stereo signal is not enough to recreate the slight environmental differences between sources in a multiple-source situation.
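    As a numerical illustration of this point, here is a toy numpy sketch. Random noise stands in for the 14 dry player signals, and synthetic decaying noise stands in for the seat-position IRs; everything here is hypothetical, not VSL or MIR data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_players = 14
    sig_len = 2048
    ir_len = 256

    # A dry signal per (hypothetical) player, and a slightly different
    # decaying impulse response for each seat position.
    players = [rng.standard_normal(sig_len) for _ in range(n_players)]
    seat_irs = [rng.standard_normal(ir_len) * np.exp(-np.arange(ir_len) / 64.0)
                for _ in range(n_players)]

    # Real hall: each player is convolved with the IR of their own seat,
    # and the results sum in the air.
    per_seat = sum(np.convolve(p, ir) for p, ir in zip(players, seat_irs))

    # Sampled ensemble + single IR: the players are mixed first, then the
    # mix is convolved with one IR (here: the average of the seat IRs).
    one_ir = np.mean(seat_irs, axis=0)
    mixed_first = np.convolve(sum(players), one_ir)

    # The two results differ: mixing first is only equivalent when every
    # source shares the *same* IR.
    diff = np.max(np.abs(per_seat - mixed_first))
    print(diff > 1e-6)  # True
    ```

    The two renderings have the same length but clearly different content, which matches the claim that one shared IR cannot reproduce per-player room differences.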

    So how do we solve this? In my view, the best way would be to capture IRs at 14 player positions, convolve 14 different solo violin recordings with them, and mix the results together. That is exactly how the environment itself works its magic on an ensemble sound. But it is not possible: even if MIR can offer IRs for different positions, 14 different solo violins are hard to create from samples. So I tried another way. I don't have MIR, so I used Altiverb. What changes if you move from one speaker to another while they are playing a stereo signal? The pan. So, assuming IRs can follow the same pattern, I convolved the dry signal several times with different panning applied to the IR or the result, introduced a slight delay on each copy (the distances from far right and from center to the microphone differ, so there is a slight delay), and mixed them together. I tried this, and the result, I have to say, is good: at least better than the dry sound through one stereo IR. But it is still far from perfect, because the IRs are almost the same.
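    For what it's worth, that pan-and-delay experiment can be sketched offline in numpy. Altiverb was used in the original experiment; the helper below, its name, and the pan/delay values are purely illustrative:

    ```python
    import numpy as np

    def place_copy(dry, ir, pan, delay_samples):
        """Convolve a dry mono signal with an IR, then apply a simple
        constant-power pan and a per-copy delay (a rough stand-in for a
        position-dependent IR)."""
        wet = np.convolve(dry, ir)
        wet = np.concatenate([np.zeros(delay_samples), wet])
        theta = pan * np.pi / 2          # pan in [0, 1]: 0 = left, 1 = right
        return np.stack([wet * np.cos(theta), wet * np.sin(theta)])

    rng = np.random.default_rng(1)
    dry = rng.standard_normal(4096)                            # toy dry signal
    ir = rng.standard_normal(512) * np.exp(-np.arange(512) / 128.0)  # toy IR

    # Three panned, slightly delayed copies approximating three seat groups.
    copies = [place_copy(dry, ir, pan, delay)
              for pan, delay in [(0.2, 0), (0.5, 7), (0.8, 13)]]

    # Pad all copies to the longest one and sum them into a stereo mix.
    length = max(c.shape[1] for c in copies)
    mix = sum(np.pad(c, ((0, 0), (0, length - c.shape[1]))) for c in copies)
    print(mix.shape)  # (2, 4620)
    ```

    As the post says, this only spreads the image; because every copy shares the same IR, it remains an approximation of true per-position convolution.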

    That's why I am thinking about MIR.

    MIR introduces multi-placement IRs. If I divide my ensemble sound into small parts (actually copy and paste it, because we can't separate the individual players in a sampled ensemble), e.g. one part covering 0-30% of the stereo image width, one 30-60% and the last 60-100%, then route them through different MIR placements with a slight delay on each part, the sound should be even better than in my experiment.

    And if this is correct, maybe MIR could have a built-in function for string ensembles.

    Still, I have only done some rough experiments and reasoned about the idea logically. It may not be correct, but it is fun to compare digital simulations with reality. And it is these differences that make reality real.


    And sorry for my poor English :)


    (or may be I should use my new name Hikari?)

  • Thanks for posting your thoughts, YWT/Hikari.

    The "Pictures At An Exhibition" mixes are several years old and have to be understood as a concept study. Instruments were convolved one by one with a few dedicated IRs each, never hearing the complete picture (pun intended ;-) ...). I was happy when the computer I used back then handled 12 IRs in realtime. By _no means_ did this show the full potential of Vienna MIR as we are testing it right now:

    Today we are in a position to hear the equivalent of several hundred (or maybe even a thousand) individual IRs, plus the correct directivity pre-filtering for each of 50 or more instruments, and some other unique features.

    Listen to some more recent MIR demos, which are available [URL=]here[/URL] (in surround, if you have the possibility). While they are already outdated again by the newest beta versions of MIR we are testing right now, they should give you an impression of how things have developed over the years.

    Thanks for your interest!

    /Dietz - Vienna Symphonic Library
  • Thanks Dietz for your reply!

    Actually, I heard the two new demos before I posted. They are much better than the older demos. But still, the string ensemble problem remains, although it is already a lot better than a single IR convolved with the whole ensemble!

    My point is that using one IR to convolve an ensemble's dry sound is far from reality. I compared the new demos with some recordings, with my ears and with my eyes (by comparing spectrograms). There are obvious differences: a real recording of a violin ensemble spreads its energy wider across the frequency spectrum, while the VI sound is a little narrower.
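    One crude way to put a number on that "wider spread" observation is to count how many FFT bins carry significant energy. This is a hedged toy sketch in numpy: pure tones stand in for real recordings, the detuning is exaggerated for illustration, and `occupied_bins` is a made-up helper, not a standard measure:

    ```python
    import numpy as np

    def occupied_bins(x, thresh_db=-40.0):
        """Count FFT bins within `thresh_db` of the strongest bin: a crude
        single-number proxy for how widely energy is spread in the spectrum."""
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
        return int((spec > spec.max() * 10 ** (thresh_db / 10)).sum())

    fs = 44100
    t = np.arange(fs) / fs  # one second of signal, 1 Hz bin width

    # Toy stand-ins: a single 880 Hz tone vs. fourteen slightly detuned
    # copies, mimicking one perfect "player" vs. an ensemble.
    solo = np.sin(2 * np.pi * 880 * t)
    ensemble = sum(np.sin(2 * np.pi * 880 * (1 + d) * t)
                   for d in np.linspace(-0.01, 0.01, 14))

    # The detuned ensemble occupies noticeably more bins than the solo tone.
    print(occupied_bins(solo), occupied_bins(ensemble))
    ```

    On real material one would compare spectrograms rather than a single FFT, but the direction of the effect is the same: many slightly different sources widen the spectral band.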

    I have another question regarding the demo "Wondering Why". Did it use Appassionata Strings? I have listened to a lot of Appassionata Strings demos, and I believe this is the lush sound I like. But I am still wondering why the 14 violins do not sound good enough.

    No offense to this great library. VI and VSL are the BEST orchestral libraries out there, and their sound is far better than the competitors'. I just want to find a way to make music built with this library more realistic, precisely because VSL is the closest to reality.

    When I can, I will post some spectrogram pictures to help the discussion.

    Again, I will definitely buy MIR if I can. It is absolutely a must-have!

    All the best!


    IMO there is one patch missing from the 14 Violins: molto vibrato. Using Cell xFade together with the Perf Leg patches would be fantastic. I'm sure it is too late to do any more recordings, but I really would have liked a 14 Violins sus molto vib FA auto.


    I totally agree with you, and you may find a solution there:

    I'm a big fan of VSL... even more than that... I'm pretty sure they won't let anyone surpass them for too long ;)

    The way I make my strings sound better is to stack Chamber, Orchestral, Solo and sometimes Appassionata strings; with different combinations it gives the feel of several players much more than one perfect player.

    Great patch idea, DG. Having that control would be terrific and would breathe necessary life into our sequences.

    Ramu - I do the same, but sometimes get a thinning or phasing effect if too many are combined. I generally combine the solo and one of the ensemble patches. Have you found another combination that gives positive results?

    Wish I had a TON more pfp patches in all the string libraries. But being able to 'create' this with xFade / vibrato would be even better (more flexible).


    I generally combine the Appassionatas with Chamber and then throw in occasional patches from the regular strings, when I need something specific that I just can't cover with the other two. It can be a little tricky to get them to sit nicely if you combine too much.

    I agree with DG, that would be very handy to have. 

    The replies to the original post are not really pertinent. I do not have MIR either, but I think it would be an interesting experiment to try something like this. The original question and statement are important because he is talking about something that happens with sampled string sound in general, and it is the single biggest problem with sampled performance of orchestral music: thin strings, usually violins, in an orchestral context, especially a heavy or large one. It is a basic acoustic problem, not something addressed by tricks like layering or fading in heavier vibrato. Also, the idea of layering Orchestral Strings on top of Appassionata is to me a distortion. How many players does that make? About 60 violins? 20 violas? 20 cellos? If you are going to reproduce Berlioz conducting the Symphonie Fantastique with a sword it would be great, but piling on layers of the same type of sample to get a single decent sound is like blending whiskey: it will never be as good as single malt. Not that I can drink that anymore...

    Maybe I should change my original post so people know I have already listened to the new demos?

    MIR is simply fantastic. But the most important thing about MIR is not only its quality. To me, from a programmer's point of view (I also do a lot of programming with Delphi and C++ on Windows), MIR is to reverb what Visual Studio is to programming (programmers here will understand the point immediately :). It simplifies the process of adjusting complex parameters, yet provides great and realistic reverb results. That is why I call MIR an "ENVIRONMENTAL REVERB", not a simple reverb plugin or software: it takes _environment_ and _real-life_ parameters, like position and direction, as its input. I was wishing for software like this years ago, back when I was trying to figure out the parameters of reverb plugins.

    What I am talking about here is, as William says, a common problem with sampled strings. A stereo sample convolved with one IR is simply not what happens in reality. In reality, a string ensemble should be seen as convolved by the IRs of many different positions, and these small yet distinct differences are what make a string ensemble sound the way it does. This cannot be overcome by sequencing techniques or mixing tricks, because we are simply not close enough to reality. Just as science has to come closer to reality if it wants to simulate and predict it: each time it comes closer, the results become more precise and more realistic.

    Actually, when I get my hands on MIR, I will start experimenting with sampled strings right away, since MIR comes with enough IRs for different positions on the stage. Think about how a real string ensemble's sound is reverberated (convolved) by the environment around it, try to simulate that in software, and _try_ to get a more realistic sound. That is what I am talking about.

    No creativity, no life.

    And, Question everything, then you can come closer to reality.

    All the best!


    I think that idea (that convolving a single stereo image is wrong) may be partly true, because I once noticed the difference between applying reverb to twelve individual tracks instead of to a few premixed combinations of the same tracks. The individually convolved version sounded much better, perhaps because of what you are talking about here: in reality, a big wall of dry sound getting reverb does not happen. Though I remember that Dietz said he does not think there really is a difference. I admit I never A/B'd the two versions, so perhaps I was hallucinating. That is always a possibility.

  • Hi William,

    I can't remember the context in detail, but I'm pretty sure I meant to say that it doesn't make much difference as long as you apply the same IR either to the individual signals or to a mixture of them.

    Of course, it makes a big difference if the individual IRs you apply are _not_ the same, but (for example) position-dependent, as in the case of MIR.
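    The first point follows from the linearity of convolution: with one shared IR, convolving the mix is mathematically identical to mixing the convolved parts. A minimal numpy check with toy signals (nothing MIR-specific):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(1000)   # dry signal 1
    b = rng.standard_normal(1000)   # dry signal 2
    ir = rng.standard_normal(200)   # one shared impulse response

    # Convolution is linear, so these two paths give identical results.
    mix_then_convolve = np.convolve(a + b, ir)
    convolve_then_mix = np.convolve(a, ir) + np.convolve(b, ir)
    print(np.allclose(mix_then_convolve, convolve_then_mix))  # True
    ```

    With a different IR per signal, as with position-dependent placement, this identity no longer holds.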

    To take this into account, MIR convolves the left and right parts of a stereo signal individually, with the respective IRs invoked by the left and right borders of an ensemble's icon.

    /Dietz - Vienna Symphonic Library