Vienna Symphonic Library Forum

  • To me, the MIR sound is not flat at all, but far more dimensional than algorithmic reverb. It really does what it is said to do - placing the instrument within a space. I do have a Lexicon reverb that is very good, but the Lexicon does not have that sense of reality and simply adds a generic reverb. Also, what was said about having to do complicated mixing tweaks to make MIR sound right is not true. It gives you an instant mix that is incredible just by dragging the instruments onto the chosen stage. You can play around with that if you want, but right out of the box it is a great sound.

    I was comparing MIR to Altiverb the other day. While Altiverb sounds good, MIR is like about 100 Altiverbs in one piece of software for any given stage. You have to be able to position multiple sound sources, without changing the microphone, in order to really use the effect of convolution - which is exactly what MIR does.
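For context, "the effect of convolution" can be sketched very simply: each source on the virtual stage is convolved with the impulse response (IR) measured for its position, and the wet signals are summed. A minimal Python/NumPy illustration - the signals and IRs below are toy placeholders, not MIR data:

```python
import numpy as np

def convolve_reverb(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Apply a room impulse response to a dry signal via FFT convolution."""
    n = len(dry) + len(ir) - 1
    size = 1 << (n - 1).bit_length()  # round up to a power of two for the FFT
    wet = np.fft.irfft(np.fft.rfft(dry, size) * np.fft.rfft(ir, size))[:n]
    return wet

# Each instrument position on the virtual stage gets its own IR;
# the room mix is simply the sum of the individually convolved sources.
dry_a = np.random.randn(1000)               # placeholder "violin" signal
dry_b = np.random.randn(1000)               # placeholder "cello" signal
ir_a = np.exp(-np.linspace(0, 8, 2000))     # toy decaying IR, stage-left
ir_b = np.exp(-np.linspace(0, 6, 2000))     # toy decaying IR, stage-right
mix = convolve_reverb(dry_a, ir_a) + convolve_reverb(dry_b, ir_b)
```

Because every position carries its own IR, moving an instrument means swapping IRs rather than re-recording the room - which is the point about positioning multiple sources without changing the microphone.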



    @Dietz said:

    Jack Weaver is pointing to a French product which actually could be based on a concept I outlined eight years ago -> Longcat's "Audio Stage". The underlying idea and even some parts of the GUI are much like my plans for a Post-Pro MIR, but without IRs, because it's based on virtual room models. I'm very fond of the basic idea, but I have to admit that I wasn't convinced by the acoustic results at all, at least in a musical context (... I just tried the demo).

    just a short note to officially claim we (Longcat) didn't steal Dietz's brain just after he outlined that Post-Pro MIR concept eight years ago 😛

    A lot of labs were working on the same sound-object concepts in the 90's (IRCAM in France, MIT in the US, etc.), or even in the 80's. And much of it took place in Virtual Reality work, or (even better) in game audio technology.

    I cannot tell you how glad we are to see similar approaches appearing alongside our AudioStage. The more the audio-object paradigm is used, the happier we are, as we see it as a confirmation of our thoughts.

    All the best for the upcoming Pro-MIR, -- Benjamin / Longcat Audio

  • William: each to their own - that goes without saying in a discussion like this. As I said, I have conflicted thoughts about it, but there is something about convolution that grates on me.

    I wouldn't compare a Lexicon with a Bricasti or a Quantec; those have another degree of realism, and their designs acknowledge the fact that listening through amplifiers and speakers is an imperfect situation, so they compensate for that, if that makes sense. Trying to sound real through speakers is never going to work, in my opinion. In other words, in an imperfect world, it is the aesthetic, or the perception, that is most important. This is where I feel convolution fails terribly.

    The idea of sampling a room is admirable in theory, but, simply put, as I said before, I would imagine some kind of merging of the two would be the best situation - or starting with a sample and modelling it from scratch. To me, a mix through a hardware reverb sounds more lifelike and realistic to the end listener, who is perhaps not an engineer or producer, than anything done with convolution. Don't get me wrong, I'm not ignoring the amazing achievements in MIR; the sound placement effect is stunning, but the convolution process lets the whole thing down for me. If only they could use another process, I think the idea and the engineering would be near perfect.



    @Benjamin / Longcat Audio said:

    [...] just a short note to officially claim we (Longcat) didn't steal Dietz's brain just after he outlined that Post-Pro MIR concept eight years ago 😛 [...]

    Welcome and thanks for your message, Benjamin. I think the essence of my reply to Jack Weaver's posting got somewhat lost in translation. What I meant to say is that I dig the concept a lot, which is not astonishing given the fact that I already sketched a similar idea in the early days of the MIR development (although obviously based on IRs, not on algorithmically generated virtual rooms). Personally, I hope that you revolutionize the post-pro market with AudioStage.

    You are also right that more and more object-based approaches to mixing are appearing on the market these days - like Iosono's "Upmix", for example. Time for us to teach all the audio people out there that faders, pan-pots and "reverbs" are just _one_ possibility for working with multiple signal sources. 😉

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • I think a lot of the criticism of the MIR approach would disappear if users recognized that the software was designed for a very fast chipset which only exists on the PC side of the computer world (dual Intel quad-core Xeon X5560, 2.8 GHz, 8 MB cache, 6.4 GT/s) or better.

    If you live in the USA, call VisionDAW and talk to Mark Nagata; this is also on the VSL website (no, I am not a sales rep for VisionDAW).

    Be prepared to spend in the vicinity of $8,000 US for the system. IT WILL CHANGE YOUR COMPOSITION LIFE! And the sound of your finished recordings.

    Regards,

    Stephen W. Beatty 


  • I'm sure it would... Personally I'm not criticising the MIR approach at all - in fact I'm praising it. I think it's fantastic, ground-breaking, genius, superb. What I don't like is the sound of convolution reverb in any setting, even MIR. I think if they took the MIR model and changed it so that either the impulses are modulated (crudely speaking) so that they are not static but merely a starting point, OR algorithms are modelled on the impulses, then it would be (with the VSL team behind it) perfect.
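To make the "modulated impulses" suggestion concrete, here is one hypothetical reading of it, sketched in Python with NumPy: resample a static IR by a tiny, slowly varying factor so that successive audio blocks see marginally different reflections instead of one frozen response. This is purely illustrative of the idea - it is not how MIR or any shipping reverb actually works:

```python
import numpy as np

def modulated_ir(ir: np.ndarray, t: float, depth: float = 0.002) -> np.ndarray:
    """Return a slightly time-varying copy of a static impulse response.

    The IR is resampled by a small factor that drifts sinusoidally over
    time, so the sampled room becomes a starting point rather than a
    fixed, static snapshot.
    """
    factor = 1.0 + depth * np.sin(2 * np.pi * 0.3 * t)  # slow drift over time
    src = np.arange(len(ir)) * factor                   # warped read positions
    return np.interp(src, np.arange(len(ir)), ir, right=0.0)

ir = np.exp(-np.linspace(0, 8, 2000))   # toy static impulse response
block0 = modulated_ir(ir, t=0.0)        # identical to the original IR
block1 = modulated_ir(ir, t=1.0)        # a slightly different variant later
```

A real implementation would interpolate per audio block (or blend the result with an algorithmic tail, the "merging of the two" mentioned earlier), but the core idea - the sampled IR as a starting point rather than a constant - is the same.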



    @Stephen W. Beatty said:

    [...] be prepared to spend in the vicinity of $8,000 US for the system. [...]

    It is true that MIR eats CPU cycles for breakfast 😉 ... but to keep things in perspective: I've seen offers for well-performing machines for 4.000,- Euro and less.

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • I am using MIR SE on an i7 920 with 24 GB of Crucial RAM (the cheapest fairly good RAM I could find) that cost around $1,200, and it works fantastically. Though I am a little conservative with ensemble sizes, as I don't want to enrage Zeus or Thor, who both collaborated with VSL on the software.



    @mpower88 said:

    [...] I think if they take the MIR model, and change it so either the impulses are modulated crudely speaking so that they are not static but merely a starting point, OR model algorithms on the impulses, then it would be (with the VSL team behind it) perfect. [...]

    I wonder if the new Hybrid Reverb will do what you're wanting. I think that Audioease is also working on something along these lines.

    DG