Hi DG, that's similar to what my thinking was; however, I found that as soon as you put any signal through an impulse response, it sucks the life out of it, especially with medium-to-large mixes.
-
Interesting topic indeed! What I think is a major misconception about MIR is that you just put your instruments in the room where you think they belong and expect everything to sound great. To be honest: when MIR was announced, I also had high hopes that I wouldn't have to care about mixing that much any more in the future. I guess it will always take experience and skill to do a great mix, no matter which tools you might have ...
In my opinion, the problem is that most people use samples for a film-music kind of sound (even if what they compose might not always be film music). When you use the Vienna Konzerthaus main stage, it sounds more like a classical recording. What DG wrote is exactly what they do on a film-music recording most of the time: they record on a rather small stage (compared to a concert stage) and do plenty of manipulation, especially reverb, in the mix.
I am curious what MIR Pro will do in that regard. I hope it will be more of a "creative stage-sound manipulation tool", where you can tweak each instrument the way you want it to sound. E.g., it may be realistic in theory that every instrument sits in the same recording room, but that is not necessarily the best result for the sound you are after. Maybe you want the brass in the Konzerthaus, but for the strings you prefer a small stage with additional reverb to compensate, etc. ...
-
@mpower88 said:
Hi DG, that's similar to what my thinking was; however, I found that as soon as you put any signal through an impulse response, it sucks the life out of it, especially with medium-to-large mixes.
In my experience, the times this has happened to me were when the original performance wasn't that great to start with. However, as far as the algorithmic reverb is concerned, I use this as a send from the original audio files, not after a convolution insert. I hope that this makes sense. [:$]
DG
-
What I think is a major misconception about MIR is that you just put your instruments in the room where you think they belong and expect everything to sound great. To be honest: when MIR was announced, I also had high hopes that I wouldn't have to care about mixing that much any more in the future. I guess it will always take experience and skill to do a great mix, no matter which tools you might have ...
To a large degree I think MIR can be 'set & forget'. However, a lot of us like to expand our spatial/reverberant techniques into what we're hearing in our minds for the mix at hand. Besides, we like playing with things. With the advent of MIR Pro there will be a ton of things we can do creatively with stems and technical tricks.
I do like mpower88's ideas for future expansion of MIR.
-
In this regard I was secretly hoping that when MIR Pro comes out I will still have a regular MIR license and, having 2 machines, will be able to install MIR on one and MIR Pro on the other to get 2 MIR-type venues simultaneously. I haven't run this concept past Dietz yet, but on the surface I can't see anything that might stop this from being possible.
I guess this will depend on whether or not MIR Pro is an insert. If it is, then there will be no reason not to be able to run a different venue per insert.
However, I would imagine that in your case you would need 2 licences to run on two machines, just as you currently do with VE Pro.
DG
-
DG: your idea of running the algorithmic reverb as a send makes sense, but aren't you messing up the placement? Sorry, I'm not very familiar with MIR's actual operation... maybe I'm getting this wrong. In terms of the mixes, I was referring to quite good mixes: a couple by Dietz that I was listening to from the demo zone, compared with some older mixes from others that sound like they have not used convolution reverb, and comparing those with live recordings from film scores.
-
You could be messing up the placement, exactly as you could when using a multi-mic set-up in the real world. So treat it the same: MIR is the overhead mics, and the audio track sends are the sends from the close mics. There is also no reason that you couldn't use a send from the o/h (MIR) as well. You just have to pan the audio tracks manually for the send, and not for the actual audio track (and this is crucial, or you would mess up the MIR placement).
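DG's routing can be sketched in a few lines of NumPy (hypothetical function names, and a second short impulse response standing in for the algorithmic reverb, purely for illustration). The crucial point is that the reverb send is fed from the dry track, not from the convolved output:

```python
import numpy as np

def convolve_ir(dry, ir):
    """'Placement' by convolution: the dry signal through an impulse response."""
    return np.convolve(dry, ir)

def mix_with_dry_send(dry, placement_ir, reverb_ir, send_level=0.3):
    """DG's routing: convolution placement as the insert on the track, while
    the algorithmic-reverb send (stood in for here by reverb_ir) is fed
    from the DRY signal, pre-convolution."""
    placed = convolve_ir(dry, placement_ir)          # MIR-style placement (insert)
    send = send_level * convolve_ir(dry, reverb_ir)  # send taken from the dry track
    out = np.zeros(max(len(placed), len(send)))
    out[:len(placed)] += placed
    out[:len(send)] += send
    return out
```

In a real session the send channel would also be panned to match the MIR position, as DG points out; the sketch ignores panning and works in mono for brevity.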
DG
-
@mpower88 said:
Yeah, makes sense in the real world, but how does it sound in the virtual world?
Using the current MIR, loading audio files in Kontakt and then running them in parallel for the audio track sends, it sounds pretty good. Obviously this is no way to work for real, but at least I know what will be possible when MIR Pro is released.
DG
-
@mpower88 said:
Got any examples we can listen to??
Unfortunately not at the moment, because I'm in the middle of mixing an album, but I'll try to do a couple of single-instrument examples in a couple of weeks. Meanwhile, feel free to post an example where you feel that convolution has destroyed the performance. A before-and-after would be really useful to hear.
DG
-
To me, the MIR sound is not flat at all, but far more dimensional than algorithmic reverb. It really does what it is said to do: place the instrument within a space. I do have a Lexicon reverb that is very good, but the Lexicon does not have that sense of reality and simply adds a generic reverb. Also, what was said about having to do complicated mixing tweaks with MIR to sound right is not true. It gives you an instant mix that is incredible just by dragging the instruments onto the chosen stage. You can play around with that if you want, but right out of the box it is a great sound.
I was comparing MIR to Altiverb the other day. While Altiverb sounds good, MIR is like 100 Altiverbs in one piece of software for any given stage. You have to be able to position multiple sound sources, without changing the microphone, in order to really use the effect of convolution, which is exactly what MIR does.
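The "100 Altiverbs in one" point boils down to per-source convolution: each source gets its own position-specific impulse response, and the room mix is the sum of the results. A minimal NumPy sketch (hypothetical names, and nothing to do with either product's actual internals):

```python
import numpy as np

def place_sources(sources, irs):
    """Mix several sources into one room: each source is convolved with the
    impulse response for ITS stage position, then everything is summed -
    conceptually what a MIR-style tool does with one IR set per position."""
    length = max(len(s) + len(ir) - 1 for s, ir in zip(sources, irs))
    out = np.zeros(length)
    for s, ir in zip(sources, irs):
        y = np.convolve(s, ir)
        out[:len(y)] += y
    return out
```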
-
@Dietz said:
Jack Weaver is pointing to a French product which actually could be founded on a concept I outlined eight years ago -> Longcat's "Audio Stage" . The underlying idea and even some parts of the GUI are much like my plans for a Post-Pro MIR, but without IRs, because it's based on virtual room models. I'm very fond of the basic idea, but I have to admit that I wasn't convinced by the acoustic results at all, at least in a musical context (... I just tried the demo).
Just a short note to officially state that we (Longcat) didn't steal Dietz's brain just after he outlined that Post-Pro MIR concept eight years ago 😛
A lot of labs were working on the same sound-object concepts in the '90s (IRCAM in France, MIT in the US, etc.), or even in the '80s. And all of it found its way into virtual-reality worlds, or (even better) into game audio technology.
I can't tell you how glad we are to see similar approaches appearing alongside our AudioStage. The more the audio-object paradigm is used, the happier we are, as we see it as a confirmation of our thinking.
All the best for the upcoming Pro-MIR,
-- Benjamin / Longcat Audio
-
William: to each their own; that goes without saying in a discussion like this. As I said, I have conflicted thoughts about it, but there is something about convolution that grates on me.
I wouldn't compare a Lexicon with a Bricasti or a Quantec; those have another degree of realism, and they acknowledge in their design the fact that listening through amplifiers/speakers is an imperfect situation, so they compensate for that, if that makes sense. Trying to sound real through speakers is never going to work, in my opinion. In other words, in an imperfect world, it is the aesthetic, or the perception, that is most important. This is where I feel convolution fails terribly. The idea of sampling a room is admirable in theory, but, simply put, as I said before, in my mind I would imagine some kind of merging of the two would be the best situation, or starting with a sample and modelling it from scratch. To me, a mix through a hardware reverb sounds more lifelike and realistic to the end listener, who is perhaps not an engineer or producer, than anything done with convolution. Don't get me wrong, I'm not ignoring the amazing achievements in MIR; the sound-placement effect is stunning. But the convolution process lets the whole thing down for me; if only they could use another process, I think the idea and the engineering would be near perfect.
-
@Dietz said:
Jack Weaver is pointing to a French product which actually could be founded on a concept I outlined eight years ago -> Longcat's "Audio Stage" . The underlying idea and even some parts of the GUI are much like my plans for a Post-Pro MIR, but without IRs, because it's based on virtual room models. I'm very fond of the basic idea, but I have to admit that I wasn't convinced by the acoustic results at all, at least in a musical context (... I just tried the demo).
Just a short note to officially state that we (Longcat) didn't steal Dietz's brain just after he outlined that Post-Pro MIR concept eight years ago 😛
A lot of labs were working on the same sound-object concepts in the '90s (IRCAM in France, MIT in the US, etc.), or even in the '80s. And all of it found its way into virtual-reality worlds, or (even better) into game audio technology.
I can't tell you how glad we are to see similar approaches appearing alongside our AudioStage. The more the audio-object paradigm is used, the happier we are, as we see it as a confirmation of our thinking.
All the best for the upcoming Pro-MIR,
-- Benjamin / Longcat Audio
Welcome, and thanks for your message, Benjamin. I think the essence of my reply to Jack Weaver's posting got somewhat lost in translation. What I meant to say is that I dig the concept a lot, which is not astonishing given the fact that I already sketched a similar idea in the early days of the MIR development (although obviously based on IRs, not on algorithmically generated virtual rooms). Personally, I hope that you revolutionize the post-pro market with AudioStage.
You are also right that more and more object-based approaches to mixing are appearing on the market these days - like Iosono's "Upmix", for example. Time for us to teach all the audio people out there that faders, pan-pots and "reverbs" are just _one_ possibility for working with multiple signal sources. 😉
Kind regards,
/Dietz - Vienna Symphonic Library -
I think a lot of the criticism of the MIR approach would disappear if users recognized that the software was designed for a very fast chipset which only exists on the PC side of the computer world (dual Intel quad-core Xeon X5560, 2.8 GHz, 8 MB cache, 6.4 GT/s processors) or better.
If you live in the USA, call VisionDAW Systems and talk to Mark Nagata; this is also on the VSL website (no, I am not a sales rep for VisionDAW).
Be prepared to spend in the vicinity of $8,000 US for the system. IT WILL CHANGE YOUR COMPOSITION LIFE! And the sound of your finished recordings.
Regards,
Stephen W. Beatty
-
I'm sure it would... Personally, I'm not criticising the MIR approach at all; in fact I'm praising it. I think it's fantastic, ground-breaking, genius, superb. What I don't like is the sound of convolution reverb, in any setting, even MIR. I think if they take the MIR model and change it so that either the impulses are modulated (crudely speaking) so they are not static but merely a starting point, or algorithms are modelled on the impulses, then it would be (with the VSL team behind it) perfect.
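The "modulated impulses" idea could look something like this crude sketch (entirely my illustration of the poster's suggestion, not anything MIR actually does): keep the early reflections intact, and apply a little random amplitude modulation to the tail, so each render of the IR is a variation rather than a static snapshot.

```python
import numpy as np

def modulated_ir(ir, depth=0.05, rng=None):
    """Perturb an impulse response slightly so it becomes a starting point
    rather than a static snapshot: the early reflections stay untouched,
    while the tail gets small random amplitude modulation."""
    if rng is None:
        rng = np.random.default_rng()
    out = np.asarray(ir, dtype=float).copy()
    early = len(out) // 4                    # preserve the early reflections
    mod = 1.0 + depth * rng.standard_normal(len(out) - early)
    out[early:] *= mod
    return out
```

A real implementation would need to be far more careful (phase, frequency-dependent modulation, avoiding audible artifacts), but it shows the shape of the idea: the sampled room as a seed, not a fixed answer.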
-
@Stephen W. Beatty said:
[...] be prepared to spend in the vicinity of $8,000 US for the system. [...]
It is true that MIR eats CPU cycles for breakfast 😉 .... but to keep things in perspective: I've seen offers for well-performing machines for 4,000 Euro and less.
Kind regards,
/Dietz - Vienna Symphonic Library