Vienna Symphonic Library Forum
Forum Statistics

194,283 users have contributed to 42,914 threads and 257,948 posts.

In the past 24 hours, we have 2 new thread(s), 17 new post(s) and 86 new user(s).

  • last edited

    @mpower88 said:

    [...]

    then again perhaps the engineers will think "go to hell!!!" haha.  

    Go to hell! ;-D

    No - but seriously: I appreciate the thoughts you devote to MIR, and rest assured that we have lots of new ideas and approaches up our sleeves. The topic of "spatialization and reverb" is not something VSL will leave alone for quite a few years to come, that's for sure.

    That said, I (unsurprisingly) have a hard time following your rationale for why MIR fails so miserably, at least for your ears. Maybe it is a question of expectations. Real rooms rarely sound "pretty", and I perfectly understand that the raw realism of MIR venues is sometimes hard to take. (The same is true of VSL instrument samples, but that's a different story 😉 ...) To say that algorithmic reverb supersedes convolution-based reverbs is the exact opposite of my listening experience of the last 25 years, though.

    Jack Weaver is pointing to a French product which could actually be founded on a concept I outlined eight years ago -> Longcat's "Audio Stage". The underlying idea and even some parts of the GUI are much like my plans for a Post-Pro MIR, but without IRs, because it's based on virtual room models. I'm very fond of the basic idea, but I have to admit that I wasn't convinced by the acoustic results at all, at least in a musical context (... I just tried the demo).

    Kind regards (and _now_ go to hell  ;-)) ...),


    /Dietz - Vienna Symphonic Library
  • Hi Dietz, Thanks, I did invite that, didn't I? ;)

    To your points: I'm glad to hear the topic is not closed! That's exciting to hear for anyone who knows the history of your company.

    I did not say MIR fails miserably to my ears! Quite the contrary: it is an amazing success. Rather, it solves many problems while not solving some that I had perhaps unconsciously hoped would be solved. Perhaps what I'm trying to say is that I'm already hoping for the next generation.

    Simply put, I'm wondering why one couldn't, with future technology, apply algorithms to the impulse responses you already have, to give them more life, variance, and breath. To me, impulses always sound so flat. Although MIR sounds more real, other mixes have more "life" to them, to my ears. It's a difficult thing to articulate, really, because at the same time I am genuinely amazed at the realism that MIR generates. I hope this makes sense!
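Mechanically, the "flatness" being described has a plausible explanation: an IR-based reverb applies one fixed convolution, so every note excites exactly the same room response, with none of the variation of a live space. A minimal sketch in Python/NumPy (purely illustrative; the function name is mine and this has nothing to do with MIR's actual engine):

```python
import numpy as np

def convolution_reverb(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Convolve a dry signal with a room impulse response via the FFT.

    Every input sample excites the *same* static response, which is one
    way to understand the "flat" quality discussed in the thread: the
    virtual room never varies from note to note.
    """
    n = len(dry) + len(ir) - 1                 # length of the linear convolution
    size = 1 << (n - 1).bit_length()           # next power of two for the FFT
    spectrum = np.fft.rfft(dry, size) * np.fft.rfft(ir, size)
    return np.fft.irfft(spectrum, size)[:n]

# A toy example: a single click through a two-tap "room".
click = np.array([1.0, 0.0, 0.0])
room = np.array([1.0, 0.5])                    # direct sound + one echo at half level
wet = convolution_reverb(click, room)          # → [1.0, 0.5, 0.0, 0.0]
```

The click simply reappears shaped by the IR; feed it in twice and you get the identical response twice, which is what an algorithmic reverb with internal modulation avoids.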


  • This is a very interesting topic. To my ears MIR works very well for some instruments and not so well for others. However, all instruments require an amount of tweaking in order to work in the mix; there is no "one size fits all".

    However, whilst I quite like a bit of algorithmic reverb in the mix, I really don't like the sound without some sort of "placement in a room" as well. For example, I like the smaller, studio sounds in MIR together with an extra tail from LEX or Bricasti. I know that this is against purist views of how to do things, but as all recording is affected by psychoacoustics, I don't care.

    DG


  • Hi DG, that's similar to what my thinking was; however, I found that as soon as you put any signal through an impulse response, it sucks the life out of it, especially with medium to large mixes.


  • Interesting topic indeed! What I think is a major misconception of MIR is that you just put your instruments in the room where you think they belong and expect everything to sound great. To be honest: when MIR was announced, I also had high hopes that I wouldn't have to care about mixing that much any more in the future. I guess it will always take experience and skill to do a great mix, no matter which tools you might have ...

    In my opinion, the problem is that most people use samples for a film-music kind of sound (even if it might not always be film music they compose). When you use the Vienna Konzerthaus main stage, it sounds more like a classical recording. What DG wrote is exactly what they do on a film-music recording most of the time: they record on a rather small stage (compared to a concert stage) and do plenty of manipulation, especially reverb, in the mix.

    I am curious what MIR Pro will do in that regard. I hope it will be more of a "creative stage-sound manipulation tool", where you can tweak each instrument the way you want it to sound. E.g., it may be realistic in theory that every instrument sits in the same recording room, but that is not necessarily the best result for the sound you are after. Maybe you want the brass in the Konzerthaus, but for the strings you prefer a small stage with additional reverb to compensate, etc. ...


  • last edited

    @mpower88 said:

    Hi DG, that's similar to what my thinking was; however, I found that as soon as you put any signal through an impulse response, it sucks the life out of it, especially with medium to large mixes.

     

    In my experience, the times this has happened to me were when the original performance wasn't that great to start with. However, as far as the algorithmic reverb is concerned, I use it as a send from the original audio files, not after a convolution insert. I hope that this makes sense.

    DG
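The routing DG describes (convolution as the placement insert, with the algorithmic reverb fed as a parallel send from the dry track rather than chained after the convolution) can be sketched as follows. Everything here is illustrative: `algo_tail` is a crude multi-tap echo standing in for a real algorithmic unit, not a model of a Lexicon or Bricasti, and the levels are arbitrary.

```python
import numpy as np

def algo_tail(dry, decay=0.6, taps=4, spacing=3):
    """Stand-in for an algorithmic reverb: a crude multi-tap echo.
    (Purely illustrative; real units use modulated delay networks.)"""
    out = np.zeros(len(dry) + taps * spacing)
    for k in range(1, taps + 1):
        out[k * spacing : k * spacing + len(dry)] += (decay ** k) * dry
    return out

def mix_with_send(dry, placement_ir, send_level=0.3):
    """Placement via a convolution insert, plus an algorithmic tail
    fed from the *dry* track as a parallel send (not post-convolution)."""
    placed = np.convolve(dry, placement_ir)      # insert path: dry -> IR
    tail = send_level * algo_tail(dry)           # send path: taken from dry
    out = np.zeros(max(len(placed), len(tail)))
    out[:len(placed)] += placed                  # sum the two paths
    out[:len(tail)] += tail
    return out

dry = np.array([1.0])                            # a single click
mixed = mix_with_send(dry, np.array([1.0, 0.25]))
```

The point of the topology is that the tail never passes through the IR, so whatever "life" the algorithmic box adds is not smeared by the static convolution.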


  • last edited

    @Another User said:

    What I think is a major misconception of MIR is that you just put your instruments in the room where you think they belong and expect everything to sound great. To be honest: when MIR was announced, I also had high hopes that I wouldn't have to care about mixing that much any more in the future. I guess it will always take experience and skill to do a great mix, no matter which tools you might have ...

    To a large degree I think MIR can be 'set & forget'. However, a lot of us like to expand our spatial/reverberant techniques into what we're hearing in our minds for the mix at hand. Besides, we like playing with things. With the advent of MIR Pro there will be a ton of things we can do creatively with stems and technical tricks.

    I do like mpower88's ideas for future expansion of MIR.



  • last edited

    @Jack Weaver said:

    In this regard I was secretly hoping that when MIR Pro comes out I will still have a regular MIR license and, having 2 machines, will be able to install MIR on one and MIR Pro on the other to get 2 MIR-type venues simultaneously. I hadn't run this concept past Dietz yet, but on the surface I can't see anything that might stop this from being possible.

     

    I guess that this will depend on whether or not MIR Pro is an insert. If it is, then there will be no reason not to be able to run a different venue per insert.

    However, I would imagine that in your case you would need 2 licences to run on two machines, just as you currently do with VE Pro.

    DG


  • DG: your idea of running the algorithmic reverb as a send makes sense, but aren't you messing up the placement? Sorry, I'm not very familiar with MIR's actual operation... maybe I'm getting this wrong. In terms of the mixes, I was referring to quite good mixes: a couple by Dietz I was listening to from the demo zone, compared with some older mixes from others that sound like they have not used convolution reverb, and comparing those with live recordings from film scores.


  • You could be messing up the placement, exactly as you could when using a multi-mike set-up in the real world. So, treat it the same. MIR is the overhead mikes; audio track sends are the sends from the close mikes. There is also no reason that you couldn't use a send from the o/h (MIR) as well. You just have to pan the audio tracks manually for the send, and not for the actual audio track (and this is crucial, or you would mess up the MIR placement).

    DG
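The "pan the send, not the track" rule above can be sketched with a simple equal-power pan applied only to the close-mic send path, while the track feeding MIR stays un-panned (MIR itself handles the placement). Illustrative Python; the pan law and values are generic, not MIR-specific:

```python
import numpy as np

def equal_power_pan(mono, pan):
    """Equal-power pan of a mono signal; pan in [-1, 1] (left..right).
    Returns the (left, right) pair; total power is constant across pan."""
    angle = (pan + 1.0) * np.pi / 4.0          # map [-1, 1] -> [0, pi/2]
    return mono * np.cos(angle), mono * np.sin(angle)

# The track itself goes to MIR un-panned; only the send is panned by
# hand so the close-mic reverb matches the instrument's MIR position.
track = np.array([1.0, 0.5])
send_l, send_r = equal_power_pan(track, pan=0.5)   # instrument right of centre
```

Panning the main track instead of the send would shift the signal MIR receives and thereby break the placement, which is exactly the mistake the post warns against.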


  • Yeah, makes sense in the real world, but how does it sound in the virtual world?


  • last edited

    @mpower88 said:

    Yeah, makes sense in the real world, but how does it sound in the virtual world?

     

    Using the current MIR, loading audio files in Kontakt and then running them in parallel for the audio track sends, it sounds pretty good. Obviously this is no way to work for real, but at least I know what will be possible when MIR Pro is released.

    DG


  • Got any examples we can listen to??


  • An easy way to check this yourself is to do a "Dry Only" version of your piece once you're ready. There's a dedicated button in MIR's Output Channel. Send this to an algorithmic reverb of your choice. Done. :-)

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • last edited

    @mpower88 said:

    Got any examples we can listen to??

    Unfortunately not at the moment, because I'm in the middle of mixing an album, but I'll try to do a couple of single-instrument examples in a couple of weeks. Meanwhile, feel free to post an example where you feel that convolution has destroyed the performance. A before and after would be really useful to hear.

    DG


  • Hi Dietz, I don't have MIR, and I'm on a Mac-only setup right now... although I was considering a dual-boot system... but for now I'll wait and see. It looks like a Mac version of MIR Pro might be coming? If so, I'll look forward to that and to what might be possible.


  • To me, the MIR sound is not flat at all, but far more dimensional than algorithmic reverb. It really does what it is said to do: placing the instrument within a space. I do have a Lexicon reverb that is very good, but the Lexicon does not have that sense of reality and simply adds a generic reverb. Also, what was said about having to do complicated mixing tweaks with MIR to sound right is not true. It does give you an instant mix that is incredible just by dragging the instruments onto the chosen stage. You can play around with that if you want, but right out of the box it is a great sound.

    I was comparing MIR to Altiverb the other day. While Altiverb sounds good, MIR is like about 100 Altiverbs in one piece of software for any given stage. You have to be able to position multiple sound sources, without changing the microphone, in order to really use the effect of convolution, which is exactly what MIR does.


  • last edited

    @Dietz said:

    Jack Weaver is pointing to a French product which could actually be founded on a concept I outlined eight years ago -> Longcat's "Audio Stage". The underlying idea and even some parts of the GUI are much like my plans for a Post-Pro MIR, but without IRs, because it's based on virtual room models. I'm very fond of the basic idea, but I have to admit that I wasn't convinced by the acoustic results at all, at least in a musical context (... I just tried the demo).

    just a short note to officially claim we (Longcat) didn't steal Dietz's brain just after he outlined that Post-Pro MIR concept eight years ago 😛

    A lot of labs were working on the same sound-object concepts in the '90s (IRCAM in France, MIT in the US, etc.), or even in the '80s. And all of this took place in virtual-reality worlds, or (even better) in game audio technologies.

    I cannot say how glad we are to see some similar approaches appearing alongside our AudioStage. The more the audio-objects paradigm is used, the happier we are, as we see it as a confirmation of our thoughts.

    All the best for the upcoming Pro-MIR, -- Benjamin / Longcat Audio

  • William: to each their own; that goes without saying in a discussion like this. As I said, I have conflicted thoughts about it, but there is something about convolution that grates on me.

    I wouldn't compare a Lexicon with a Bricasti or a Quantec; the latter have another degree of realism, and they acknowledge in their design the fact that listening through amplifiers/speakers is an imperfect situation, so they compensate for that, if that makes sense. Trying to sound real through speakers is never going to work, in my opinion. In other words, in an imperfect world, it is the aesthetic, or the perception, that is most important. This is where I feel convolution fails terribly.

    The idea of sampling a room is admirable in theory, but, simply put, as I said before, in my mind I would imagine some kind of merging of the two would be the best situation, or starting with a sample and modelling it from scratch. To me, a mix through a hardware reverb sounds more lifelike and realistic to the end listener, who is perhaps not an engineer or producer, than anything done with convolution. Don't get me wrong, I'm not ignoring the amazing achievements in MIR; the sound-placement effect is stunning. But the convolution process lets the whole thing down for me. If only they could use another process, I think the idea and the engineering would be near perfect.


  • last edited

    @Dietz said:

    Jack Weaver is pointing to a French product which could actually be founded on a concept I outlined eight years ago -> Longcat's "Audio Stage". The underlying idea and even some parts of the GUI are much like my plans for a Post-Pro MIR, but without IRs, because it's based on virtual room models. I'm very fond of the basic idea, but I have to admit that I wasn't convinced by the acoustic results at all, at least in a musical context (... I just tried the demo).

    just a short note to officially claim we (Longcat) didn't steal Dietz's brain just after he outlined that Post-Pro MIR concept eight years ago 😛

    A lot of labs were working on the same sound-object concepts in the '90s (IRCAM in France, MIT in the US, etc.), or even in the '80s. And all of this took place in virtual-reality worlds, or (even better) in game audio technologies.

    I cannot say how glad we are to see some similar approaches appearing alongside our AudioStage. The more the audio-objects paradigm is used, the happier we are, as we see it as a confirmation of our thoughts.

    All the best for the upcoming Pro-MIR,

    -- Benjamin / Longcat Audio

    Welcome, and thanks for your message, Benjamin. I think the essence of my reply to Jack Weaver's posting got somewhat lost in translation. What I meant to say is that I dig the concept a lot, which is not astonishing, given the fact that I already sketched a similar idea in the early days of the MIR development (although obviously based on IRs, not on algorithmically generated virtual rooms). Personally, I hope that you revolutionize the post-pro market with AudioStage.

    You are also right that more and more object-based approaches to mixing are appearing on the market these days, like Iosono's "Upmix", for example. Time for us to teach all the audio people out there that faders, pan-pots and "reverbs" are just _one_ possibility for working with multiple signal sources. 😉

    Kind regards,


    /Dietz - Vienna Symphonic Library