Hello Dietz,
Following the announcement of Synchron Duality strings, would you say it doesn't make much sense to use those with MIR Pro?
Yeah, I would also like to know if the Synchron Duality Strings make sense together with MIR Pro 3D? :-)
cheers janosch
@jsfotografie said:
Yeah, I would also like to know if the Synchron Duality Strings make sense together with MIR Pro 3D? :-)
cheers janosch
No. Synchron Duality Strings comes with multiple mics and was recorded with the Synchron Stage sound in the signals. There is no need for spatialization, since it's already included in those signals.
You should use the Synchron Player mixer options to adapt the sound to your liking instead.
As Dietz says, you can use MIR with Synchron products - I quite often do - but there are various caveats: you need to get the sound as dry as possible, turn off Dry Signal Handling, and so on. It depends on the sound you're after - again, if you're after that big concert hall/scoring stage sound it's pointless and you're better off with the baked-in Synchron Stage sound, but there's a lot of other really interesting stuff you can do.
One issue I have with the Synchron stuff is the panning. Because I'm not making classical music, a lot of the time I don't want a conventional classical orchestral layout. For instance, with the kind of stuff I do, having both violin sections on the same side of the stereo field can cause balance problems and I often want them either side, with violas and/or celli in the middle. I'd be interested in any thoughts on how best to go about that when using the actual Synchron sound rather than MIR.
A lot of the time I'm using a combination of Synchron and VI and/or Synchron-ised libraries. Both of the latter always go via MIR, with the former it's horses for courses.
I'm experimenting with binaural mixing of Duality Strings and already getting some fabulous results.
The Stage B mics are in many cases just begging to be spatialised, which can readily be achieved either binaurally or with MIR 3D. And I've previously achieved excellent binaural spatialisation with Synchron Stage A sections in other Synchron libraries. But let's also not forget those cases where the Stage B mics can do a superb and special job just as they are, when mixed with the Stage A mics.
I've tried MIR 3D with Synchron but my strong preference now is binaural localisation and algorithmic reverb in every case. Each binauraliser (I currently use DearVR Pro with its internal reverb and reflections switched off) is followed by an algorithmic reverb plugin, then a Mid-Side controller (such as Voxengo's free MSED) for reverb sector-width adjustment. Note that some of the Stage B mics have already been panned by VSL (double clicking the power-panner in the appropriate Synchron Mix Channel reveals the Balance control).
For divisi purposes I'll typically use 2 Synchron Players per Stage B section, and this also gives me the opportunity to produce 2 separate mono-to-binaural stereo channels, each using different Humanisation delay and tuning presets. These two channels are then processed separately to produce, for each 'pseudo half' of each B section, either two separate binaural images or one wide combined one, wonderfully believable on the stage of a credible auditorium. As I've said elsewhere, Duality Strings really come alive so readily and easily straight out of the box. Then with my own binaural imaging and algoverbs, the results are the best I've ever produced from any instrument library. I highly recommend this approach to all.
That said, I don't do multi-speaker surround mixes, nor do I intend to. But I guess that's where MIR 3D serves best (unless your contract requires Dolby 3D).
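For readers who haven't met the Mid/Side width trick described above, here is a minimal sketch of what an MSED-style Side-gain adjustment does to a stereo reverb return. It is purely illustrative Python/NumPy; the function name, signal lengths and gain value are made up for the example and say nothing about how DearVR, MSED or MIR are actually implemented.

```python
import numpy as np

def adjust_stereo_width(left, right, side_gain=0.6):
    """Classic Mid/Side width control: scale only the Side (L-R) component.

    side_gain < 1.0 narrows the stereo image, > 1.0 widens it,
    0.0 collapses the signal to mono.
    """
    mid = 0.5 * (left + right)      # Mid: what both channels share
    side = 0.5 * (left - right)     # Side: what makes the image wide
    side *= side_gain               # the single knob an MSED-style tool exposes
    return mid + side, mid - side   # decode back to conventional left/right

# Hypothetical usage: narrow a stereo reverb return so it tucks in
# behind a binaurally panned dry signal (noise stands in for audio here).
reverb_l = np.random.randn(48000)
reverb_r = np.random.randn(48000)
narrow_l, narrow_r = adjust_stereo_width(reverb_l, reverb_r, side_gain=0.6)
```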
@Macker said:
Each binauraliser (I currently use DearVR Pro with its internal reverb and reflections switched off) is followed by an algorithmic reverb plugin, then a Mid-Side controller (such as Voxengo's free MSED) for reverb sector-width adjustment.
Kind of off-label use, isn't it ...? :-)
The whole idea of a binauralisation device is to encode the final mix. If you put non-binauralised reverb on already binauralised content, and additional M/S-processing on top of it, the results will be - uhm - interesting ...
@Macker said:
The Stage B mics are in many cases just begging to be spatialised, which can readily be achieved either binaurally or with MIR 3D.
Actually, the use of MIR 3D and a binauraliser isn't a question of "either/or", but a case of "one after the other". The binauralisation process only serves to translate a mix for listening via headphones, no matter whether it is stereo, surround or 3D.
Well yes, Dietz, I'd accept that my post above is somewhat "off-label" for this particular forum - apologies.
But "off-label" in terms of practical audio engineering? Well, maybe, if we're to be super-pedantic. The fact is though, my approach works. The results I'm getting are engaging and credible. Why not try it out for yourself in practice, instead of trying to pre-judge it? Eventually I might get around to producing an mp3 demo if I find some spare time.
I'm curious about your statement: "the whole idea of a binauralisation device is to encode the final mix". Well that's not exactly valid in Apple's world where, for example, on every mix channel in Logic Pro - except Stereo Out - the user can choose to use the built-in binaural panner option (ever since Logic 8 about 15 years ago).
My tweaking of the Side amplitude of reverb (in order to fit each reverb sector in as nicely as possible with the binauralised dry signals) is hardly a million miles away from MIR 3D's mathematical manipulation of mic Mid and Side signals in order to produce and distribute the 3D surround-sound components to the appropriate speakers. (Please do correct me if my theory is off here.) I hate to repeat myself, but this approach of mine works nicely in practice. And don't get me wrong, MIR 3D reverb works nicely too; it's just that I prefer algorithmic reverb.
And as for the outcome of tacking a binauraliser such as DearVR Micro onto MIR's output mix, I know you know that a few of us have reported the results as being, sorry to say, not especially praiseworthy. My explanation of that outcome, back then and still today, is that the inevitable crosstalk between simulated speaker channels tends to degrade the psychoacoustic cues essential to binauralisation.
Now although I'm not interested in re-visiting old arguments, I do reserve my right to defend my opinions when necessary. But in any case, can we not simply agree to differ? And for my part, I'll do my best to post only in 'on-label' forums!
You are of course allowed to do whatever you want. We all know the old saying, "If it sounds right, it is right." :-)
I just try to keep things in perspective for the occasional visitor to this forum. For many, things are complex enough as they are, even with "canonical" methods, when aiming for a listening experience beyond conventional stereo. I think it's only fair to point out possible misunderstandings, like this one:
@Macker said:
Well that's not exactly valid in Apple's world where, for example, on every mix channel in Logic Pro - except Stereo Out - the user can choose to use the built-in binaural panner option (ever since Logic 8 about 15 years ago).
That's only a question of the viewing angle. Of course you could binauralise each and every channel individually, but now that we have actual 3D DAWs, the options go far beyond these early solutions, so it makes perfect sense to present a full-blown final 3D mix to the audience in a binauralised form. After all, this is what Dolby, Apple and others do with their proprietary formats, too.
@Macker said:
the inevitable crosstalk between simulated speaker channels tends to degrade the psychoacoustic cues essential to binauralisation.
This is where you lost me. The simulated "crosstalk" is an integral component of any kind of binauralisation. If you don't like it, just listen to a conventional stereo mix on your headphones. :-)
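To put Dietz's point in concrete terms: binauralising a mix means convolving every (virtual) speaker feed with a left-ear and a right-ear impulse response and summing the results, so each channel necessarily reaches both ears. The toy Python sketch below assumes nothing about MIR 3D's or DearVR's internals; the function, the stand-in "HRIRs" and the noise signals are invented purely to illustrate the principle.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralise(speaker_feeds, hrirs):
    """Sum HRIR-convolved virtual speaker feeds into one two-ear signal.

    speaker_feeds: dict of channel name -> mono array (all the same length)
    hrirs:         dict of channel name -> (left_ear_ir, right_ear_ir) pair
    Every channel lands in BOTH ears; that inter-channel bleed is the
    mechanism that encodes direction for headphone listening.
    """
    n = len(next(iter(speaker_feeds.values())))
    m = len(next(iter(hrirs.values()))[0])
    left, right = np.zeros(n + m - 1), np.zeros(n + m - 1)
    for name, feed in speaker_feeds.items():
        ir_left, ir_right = hrirs[name]
        left += fftconvolve(feed, ir_left)
        right += fftconvolve(feed, ir_right)
    return left, right

# Toy example: two virtual speakers with stand-in impulse responses.
# Real HRIRs would come from a measured set (e.g. a SOFA file).
imp = np.zeros(256); imp[0] = 1.0
feeds = {"FL": 0.1 * np.random.randn(48000), "FR": 0.1 * np.random.randn(48000)}
fake_hrirs = {"FL": (imp, 0.5 * imp), "FR": (0.5 * imp, imp)}
ear_l, ear_r = binauralise(feeds, fake_hrirs)
```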
@Ben said:
@jsfotografie said:
Yeah, I would also like to know if the Synchron Duality Strings make sense together with MIR Pro 3D? :-)
cheers janosch
No. Synchron Duality Strings comes with multiple mics and was recorded with the Synchron Stage sound in the signals. There is no need for spatialization, since it's already included in those signals.
You should use the Synchron Player mixer options to adapt the sound to your liking instead.
But what could be possible, though - let's leave aside the question of whether it makes sense or not - is to use Duality Strings in MIR but switch off the mics for Stage A and only use the mics for Stage B. That way there is at least the chance for a smaller ensemble to be used in MIR.
Or am I on the wrong track?
@Frankenstein said:
But what could be possible, though - let's leave aside the question of whether it makes sense or not - is to use Duality Strings in MIR but switch off the mics for Stage A and only use the mics for Stage B. That way there is at least the chance for a smaller ensemble to be used in MIR.
Or am I on the wrong track?
No, not wrong at all. Technically that's quite a sensible approach, although it's definitely not the basic concept behind Duality Strings. :-)
@Frankenstein said:
@Ben said:
@jsfotografie said:
Yeah, I would also like to know if the Synchron Duality Strings make sense together with MIR Pro 3D? :-)
cheers janosch
No. Synchron Duality Strings comes with multiple mics and was recorded with the Synchron Stage sound in the signals. There is no need for spatialization, since it's already included in those signals.
You should use the Synchron Player mixer options to adapt the sound to your liking instead.
But what could be possible, though - let's leave aside the question of whether it makes sense or not - is to use Duality Strings in MIR but switch off the mics for Stage A and only use the mics for Stage B. That way there is at least the chance for a smaller ensemble to be used in MIR.
Or am I on the wrong track?
Yes of course it's possible; I frequently use the smaller Duality B ensemble - and also Elite Strings - in MIR. As Dietz says, this isn't what the Synchron libraries are designed for and you won't get that "orchestra in an amazing concert hall/scoring stage" sound. If, on the other hand, you're trying to get something different, as I sometimes am, fascinating results are there to be had. You can experiment with all the microphone possibilities, with positioning instruments in various ways... MIR is an amazing piece of software.
I must emphasise that I don't ALWAYS do this. When I want that hall/scoring stage sound I stick to the Synchron series as intended but, gorgeous though it is, on some projects that's just not what I want. As long as you understand what you're doing - and while you mustn't expect the impossible - you can sometimes come across the fairly astounding.
Thanks, guys. All understood and agreed.
Nevertheless, I feel it could serve the purpose of modeling a chamber orchestra in MIR, whereas surely the full strength of Duality Strings comes with the combination of Stages A and B and proper mixing.
This thread is a bit older than some of the others on the topic of Synchron libraries with MIR 3D, but it basically contains all the answers in a very clear way, so I'll pose my question here.
I understand that with today's Synchron library setups and presets, it is only worthwhile to use MIR 3D if you want to do something different from the classical orchestra placement in a large scoring room or orchestral hall. You could use the close mics with central pan and without reverb in MIR, but according to Dietz this might lead to disappointing results.
My question: with the samples available to VSL, would it be possible to create a new preset for MIR 3D similar to the one provided for the Synchron-ized libraries, i.e. one which does better than simply using the close mics? An orchestral score rendered using only the Synchron Player, compared to this preset processed with MIR 3D with the Synchron Stage as the venue, could serve as a 'quality' criterion.
It would really be very nice to make use of, firstly, the 3D features of MIR 3D, but also of all those marvelous MIR 3D venues, with our Synchron libraries as the source.
Well - you know: there is an old English proverb saying "you can't have your cake and eat it". :-)
As soon as anything other than the spot microphones gets used, the natural reverb of Synchron Stage becomes audible. On the other hand, the spot mics need to be quite close to the source to avoid too much ambience being captured, which means that they won't work too well on their own.
... it was the main feature of VSL's original Silent Stage that it made it possible to capture the "whole instrument" with all its resonances and the moving air around it, but without any actual reverb, due to the unique acoustics of this purpose-built hall. That's where the samples used for VSL's Synchron-ized Instruments come from.
Many thanks Dietz for the fast answer. I feared as much. I did not realise this when I chose to buy the Synchron instead of the Synchronized libraries.
The same issue is probably true for many other libraries recorded in well-known halls or recording studios, like most Spitfire libraries. All have one or more close microphones but, as you mention, they probably also capture some of the room ambience while on the other hand missing some elements of the "whole instrument" needed to be the perfect source for MIR.
I do have one final question. For their mixer presets, the SY Pro and Elite Strings libraries use the RoomReverb, which I assume is based on the SY Stage responses; however, SY Duality Strings, SY Woodwinds, SY Brass and SY Percussion use an algorithmic reverb component for most mixer presets. Why is this?
@Mavros said:
Many thanks Dietz for the fast answer
You're welcome! Sorry to have spoilt the fun to some extent ...
I was not involved in the creation of the Synchron Pro or Elite Strings mixer presets, but I'm pretty sure that they don't use IR-based reverb. Typically, algorithmic reverb is the obvious choice for "sweetening": its modulation makes it a great addition to the pure natural reverb tails.
... but I'll double-check as soon as I find the time. :-)
@Mavros said:
I do have one final question. For their mixer presets, the SY Pro and Elite Strings libraries use the RoomReverb, which I assume is based on the SY Stage responses; however, SY Duality Strings, SY Woodwinds, SY Brass and SY Percussion use an algorithmic reverb component for most mixer presets. Why is this?
The RoomReverb is another algorithmic reverb, I believe, just a different one. The Synchron Player has several choices for algorithmic reverb.
You can see this thread here where a VSL rep confirmed that the RoomReverb is an algorithmic reverb: https://forum.vsl.co.at/topic/56527/reverb-choices-in-synchron-player-need-more-description
My feeling is that there's generally no need to run something that was already recorded at the Synchron Stage through a Synchron IR. I don't think any of the "Synchron"-branded libraries do this, aside from the odd exception of the Rieger organ, which was the only Synchron library not recorded at the Synchron Stage.
Getting back to the original question: yes, with a modicum of extra care, technical knowledge and consideration, MIR 3D can indeed be used with any and all Synchron libraries. However, the quality of the results can depend largely on a couple of important factors:
will you be keeping the original Synchron Stage seating plan, and will you be mixing for loudspeakers rather than headphones?
If the answer to both questions is yes, then what MIR can do is extend Synchron library sound fields quite nicely into the simulated 3D space of an auditorium that is significantly larger than VSL's Stage A. You choose the simulated auditorium space you prefer when you buy your MIR package. Even so, using MIR 3D in this way is not exactly a technically straightforward matter.
Let's not forget here that a scoring stage is not designed to be an auditorium that can accommodate a large audience for concert events; hence the addition of algorithmic reverbs with tails longer than about 1.8 seconds in some of the Synchron mix presets. My own experiments have shown that MIR can, when suitably configured with Synchron Players, achieve this spatial expansion more convincingly than by using Synchron's longer-tail mix presets.
If on the other hand you want to re-arrange the instrument seating plan, then setting up the mix can get significantly more difficult and involved if we're to avoid Synchron's and MIR's psychoacoustic spatial cues contradicting each other. And in the case of mixing for headphones, adding MIR to Synchron can be done (I've done it), but it requires a different and more complicated configuration and mixing approach in order to avoid the dreadful speaker signal crosstalk when attaching a binauraliser plugin to MIR's outputs.
I no longer use my MIR 3D with Synchron libraries. Instead, I've developed a (still complicated) way of extending Synchron Stage A's sound field convincingly into a larger simulated space whilst benefitting from binaurally panned on-stage locations that can deviate substantially from the original Synchron stage placements.
Sorry to say, I can't recommend any of these approaches for all users, simply because the knowledge and skills in audio engineering required are unfortunately (and sometimes annoyingly) a bit beyond the average.