Vienna Symphonic Library Forum

  • Hi Dietz

    It is very good to hear this specific information.  One other thing I was wondering: if you took this 100% "dry" signal out of MIR and routed it into a hardware reverb, would it have the directional and spatial qualities of the concert hall, but the reverb of the hardware?  Or would it be missing some basic signal processing (aside from the great quality of MIR's own reverb)?

    The reason I ask is that I have a hardware reverb I still really like and was thinking of doing that - using MIR for placement and image, and a Lexicon for the actual reverb space of the hall.


  • @William said:

    Hi Dietz

    It is very good to hear this specific information.  One other thing I was wondering: if you took this 100% "dry" signal out of MIR and routed it into a hardware reverb, would it have the directional and spatial qualities of the concert hall, but the reverb of the hardware?  Or would it be missing some basic signal processing (aside from the great quality of MIR's own reverb)?

    The reason I ask is that I have a hardware reverb I still really like and was thinking of doing that - using MIR for placement and image, and a Lexicon for the actual reverb space of the hall.

    Sure, you can do that! Just reduce the reverb length and/or the dry/wet ratio to values that fit your needs, and use MIR as a powerful Ambisonics panner! The dry signals will keep all their volume, panning and width settings.

    In case you don't have access to a Lexicon, there's MIRacle, which comes as an add-on with MIR Pro for exactly that purpose.
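
    Just to illustrate that routing offline - this is only a rough sketch, not a recommendation of specific tools, and the file names are made up - you could bounce MIR's positioned, (almost) reverb-free output and convolve it with an impulse response captured from your hardware reverb:

        # Sketch: sum MIR's positioned dry stem with an external reverb,
        # approximated here by convolving with an IR captured from that reverb.
        import numpy as np
        import soundfile as sf
        from scipy.signal import fftconvolve

        dry, sr = sf.read("mir_dry_stem.wav")        # hypothetical bounce of MIR's dry output (stereo)
        ir, sr_ir = sf.read("hardware_hall_ir.wav")  # hypothetical IR of the external reverb (stereo)
        assert sr == sr_ir, "sample rates must match"

        # Convolve each channel of the dry stem with the matching IR channel.
        wet = np.column_stack([
            fftconvolve(dry[:, ch], ir[:, ch % ir.shape[1]])[: len(dry)]
            for ch in range(dry.shape[1])
        ])

        mix = dry + 0.5 * wet                          # dry/wet balance to taste
        mix /= max(1.0, float(np.max(np.abs(mix))))    # simple safety normalization
        sf.write("dry_plus_external_reverb.wav", mix, sr)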

    HTH,


    /Dietz - Vienna Symphonic Library
  • Thanks Dietz, I will try that. 


  • @BachRules said:

    Why is it so hard to put spaces between my paragraphs? I'm going to stop spending time on that. I don't think whoever made this forum software wants me to space my paragraphs.

    Our forum software is known to be quite peculiar when it comes to formatting a message. Have you tried to apply the following changes already ...?

    Thanks, Dietz. Yes, I have made that change already -- actually someone made it for me:

    [url]http://community.vsl.co.at/forums/t/37424.aspx[/url]

    But the forum software still occasionally omits the spaces I put between paragraphs. I was working around it by going into HTML mode to edit my posts and add formatting, but when even that didn't work, I realized I need my time for other things. I surrender to the VSL forum software, and my posts will be erratically formatted from now on, because that is the will of the VSL forum software.


  • @Another User said:

    No delays are applied to the dry signal.....

    You say there is time-alignment. Time alignment is accomplished by applying delay to signals. And yet you say no delay is applied to the dry signal. So I am inferring that time-alignment is done by adding a negative delay to the wet signal, as that seems the only possibility left, since you say no delay is added to the dry signal.


  • @Another User said:

    You say there is time-alignment. Time alignment is accomplished by applying delay to signals. And yet you say no delay is applied to the dry signal. So I am inferring that time-alignment is done by adding a negative delay to the wet signal, as that seems the only possibility left, since you say no delay is added to the dry signal.

    The time-alignment is done when preparing the raw impulse responses for MIR. The runtime delay and the remnant direct signal are carefully removed during editing. The original direct-signal component is then replaced by the readily positioned dry signal in MIR Pro, eliminating any phasing or delay issues between the two of them. - Like I wrote before: a recording engineer's dream. 😉
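
    Just to picture the first step of that idea - this is only a conceptual sketch of trimming the time-of-flight from a measured IR, not our actual editing chain, and the file name is made up:

        # Sketch: trim the initial propagation delay from a measured impulse response
        # so the wet signal starts exactly where a separately added dry signal does.
        import numpy as np
        import soundfile as sf

        ir, sr = sf.read("raw_room_ir.wav")        # hypothetical raw room measurement
        mono = ir if ir.ndim == 1 else ir.mean(axis=1)

        # The first sample rising above a threshold (relative to the peak) marks the
        # arrival of the direct sound; everything before it is pure runtime delay.
        threshold = 0.05 * np.max(np.abs(mono))
        onset = int(np.argmax(np.abs(mono) > threshold))

        aligned_ir = ir[onset:]                    # delay removed; the direct component
                                                   # itself would then be edited out separately
        sf.write("aligned_ir.wav", aligned_ir, sr)
        print(f"removed {onset} samples (~{1000 * onset / sr:.1f} ms) of pre-delay")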

    Best,


    /Dietz - Vienna Symphonic Library
  • @BachRules said:

    I want to split a signal coming out of VI, and I want to send one branch of the signal to MIR Pro, and then I want to mix (1) the raw signal with (2) MIR-Dry;

    ... What are you trying to achieve ...?

    MIR seems excellent at simulating what a listener would hear seated in a real room with the players. MIR seems to do this by default.

    A different goal of mine is to simulate the sound of orchestra professionally produced for playback on a home stereo system. Offhand, I recall reading that this sound has commonly been achieved by summing a Decca tree above the conductor with close mics. You say here that MIR can't exactly simulate that scheme:

    [url]http://community.vsl.co.at/forums/t/33389.aspx[/url]

    so I'm looking for ways to approximate it. In that thread you recommend: "... you can come close by using a nearby position for the Secondary Microphone."

    I have doubts about that, since there's only one Secondary Mic, positioned at only one location in the room, whereas I thought orchestral recordings for home listening generally use numerous close mics dedicated to particular sections and soloists(?). So I'm looking for ways to simulate those close mics, or at least the end result when they're summed with a Decca tree.

    Now I'm thinking you mean the 2nd Mic can be used (in conjunction with the 1st Mic) to help simulate the sound from a Decca tree itself, while the MIR-Dry signals are used to simulate the close mics, and that might be all I need, without any need to mix in the raw signal. Before this discussion, I hadn't realized it would be perilous to mix in the raw signal, as that's the regular practice with regular reverbs.

    Decca-tree simulation might not be the best way to accomplish what I want; I'm a mixing novice, but I at least know I don't want to simulate what a listener in the 7th row would hear, since the experts making CDs for home listening don't just put a mic in the 7th row and leave it at that.


  • My personal strategy is to use a close mic with a wide stereo field at the "Conductor" position, then even a little bit closer to the orchestra, and then a secondary mic way toward the back.  It doesn't sound like a person sitting in the orchestra.  It allows me to mix in the dry sound as I wish plus the room sound as I wish and gets much closer to simulating classical recordings than the effect of just putting a microphone out in the audience.

    I've found that the 7th row default setting doesn't get the kind of clarity that a real recording with its close mics will have and yet doesn't quite pick up the warmth of the room either, so I'll place the Conductor mic as close as humanly possible to the instruments and the secondary mic pretty far back and play with the balance between the two.  I then play with dry/wet on an individual instrument basis depending on whether or not it's a VSL sample and whether or not I imagine the instrument's sound would be absorbed or would stand out in a real orchestral recording, and more importantly, I'll play with the character presets on the individual instruments to achieve a similar effect as well.

    Might be worth trying out if you're not going for an audience-member sound.


  • @BachRules said:

    Now I'm thinking you mean [...]

    I think you've got it now! 😊 That's the way to use MIR. - Maybe the most-quoted sentence in this forum is "MIR is _not_ just another reverb." It is meant to be a virtual 3-dimensional room and should be treated as such.

    Side note: A Decca tree is just one of many possible ways to set up a main microphone array for orchestral recording. The fact that it was promoted by a well-known record company (Decca) might be one of the reasons for its popularity - but other setups can surpass the "tree" in several respects.

    It might be a good idea to take a look at the built-in MIRx-settings and/or all the "Venue Presets" available from your User Area to get a feeling for the typical use of MIR Pro.

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • @Casiquire said:

    My personal strategy is to use a close mic with a wide stereo field at the "Conductor" position, then even a little bit closer to the orchestra, and then a secondary mic way toward the back.  It doesn't sound like a person sitting in the orchestra.  It allows me to mix in the dry sound as I wish plus the room sound as I wish and gets much closer to simulating classical recordings than the effect of just putting a microphone out in the audience.

    I've found that the 7th row default setting doesn't get the kind of clarity that a real recording with its close mics will have and yet doesn't quite pick up the warmth of the room either, so I'll place the Conductor mic as close as humanly possible to the instruments and the secondary mic pretty far back and play with the balance between the two.  I then play with dry/wet on an individual instrument basis depending on whether or not it's a VSL sample and whether or not I imagine the instrument's sound would be absorbed or would stand out in a real orchestral recording, and more importantly, I'll play with the character presets on the individual instruments to achieve a similar effect as well.

    Might be worth trying out if you're not going for an audience-member sound.

    Thanks for your help with this. I haven't even begun experimenting with the 2nd Mic yet, but when I do I'll try your suggestions.

    @Dietz said:

    ... It might be a good idea to take a look at the built-in MIRx-settings and/or all the "Venue Presets" [...]

    The "Venue Presets" are unavailable to me since I don't have VE Pro, but MIRx-in-MIRPro is very helpful.


  • I understand that concept BachRules mentioned of recreating the instruments within the listener's space, which is indeed the ideal of many of the great classical recordings of the past, such as my favorites, the '70s London FFRR Mahler recordings, which include Solti's 8th and 3rd.  It is totally different from recreating a concert-hall experience.  It is, instead, trying to place the instruments into the room that the record player/CD player/docked iPod/whatever is in.  I've thought of doing that, but because the Vienna Konzerthaus and other spaces in MIR are so awesomely great, I haven't yet gone ahead with it.  Though I've thought that one could simply use far more dry signal within a provided MIR venue to accomplish this.  That approach could actually be the most like how it would be done in a live recording - close miking within a consistent concert hall/recording venue.

    Anyway I greatly appreciate being able to get this info from Dietz - a rare opportunity to hear from a master about mixing.


  • @Another User said:

    ... spaced microphones. These microphones capture sounds at differing times because of their physical separation, and so record time-of-arrival information in the two channels.... Replaying a stereo recording with timing differences between the two channels leads to a confusing set of time-of-arrival differences for our ears, but the sound is normally still perceived as having width and a certain amount of imaging information, and it usually sounds a lot more spacious than a coincident recording.... Many people specifically prefer the stereo presentation of spaced-pair recordings, finding them easier to listen to than coincident recordings.
    Is it possible to get such timing differences out of MIR (outputting to stereo)? I note from the MIR manual:

    "The Distance parameter in the Output Format Editor allows for the virtual creation of non-coincident microphone arrays (which the Ambisonics microphone always is by definition). Using sophisticated decorrelation algorithms, the reverb produced by MIR Pro is greatly enhanced in perceived spaciousness, without sacrificing the all-important localization cues of the original impulse responses."

    By increasing that Distance parameter, can I get MIR to simulate the timing differences which would be recorded by a Decca tree? I realize MIR is meant to be a virtual 3-dimensional room, and the Decca-style timing differences I'm asking about might be inconsistent with that.

  • @BachRules said:

    By increasing that Distance parameter, can I get MIR to simulate the timing differences which would be recorded by a Decca tree? I realize MIR is meant to be a virtual 3-dimensional room, and the Decca-style timing differences I'm asking about might be inconsistent with that.
     

    The "Distance" which MIR is able to create between the individual capsules of the Main Microphone array is of course a virtual one. But as such it allows for a "best of both worlds" approach, combining the benefits of both coincident and spaced microphone arrays (which wouldn't be possible in reality): "Distance" creates additional decorrelation between the individual channels, but only in the late part of the impulse response - that's what we would call "reverb" most of the time. Like that, the perceived "envelopping" is greatly enhanced. The Direct signal and the early reflection phase (about 200 - 300 ms) remains unprocessed, thus keeping the perfect imaging of the coincident array intact.

    ... adding a Secondary Microphone will add actual run-time delays and even more enveloping and "depth", but as it will provide wet signal only, the imaging of the Main Mic won't get distorted too much.

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • @BachRules said:

    By increasing that Distance parameter, can I get MIR to simulate the timing differences which would be recorded by a Decca tree? I realize MIR is meant to be a virtual 3-dimensional room, and the Decca-style timing differences I'm asking about might be inconsistent with that.
     

    The "Distance" which MIR is able to create between the individual capsules of the Main Microphone array is of course a virtual one. But as such it allows for a "best of both worlds" approach, combining the benefits of both coincident and spaced microphone arrays (which wouldn't be possible in reality): "Distance" creates additional decorrelation between the individual channels, but only in the late part of the impulse response - that's what we would call "reverb" most of the time. Like that, the perceived "envelopping" is greatly enhanced. The Direct signal and the early reflection phase (about 200 - 300 ms) remains unprocessed, thus keeping the perfect imaging of the coincident array intact.

    ... adding a Secondary Microphone will add actual run-time delays and even more enveloping and "depth", but as it will provide wet signal only, the imaging of the Main Mic won't get distorted too much.

    Kind regards,

    Thanks. That's all my questions, at least for now. Thanks for your outstanding support.


  • You're welcome! :-) Enjoy MIR Pro.


    /Dietz - Vienna Symphonic Library
  • @Dietz said:

    ... Soundwise, a dry signal in MIR has already undergone the following changes in comparison to the raw input signal:

    -- Encoding to Ambisonics B-Format (... this process is completely invisible to the user).

    -- Decoding to the selected Output Format according to the chosen position, rotation and stereo width on a Venue's stage.....

    Encoding mono signals into Ambisonics B-Format is described here:

    [url]http://www.york.ac.uk/inst/mustech/3d_audio/ambis2.htm[/url]

    but that page only describes the encoding of mono signals, and the other Ambisonics documentation I've found is also limited to the encoding of mono raw signals. Is there an Ambisonics standard for encoding stereo raw signals? Without understanding how MIR encodes raw stereo signals, I wonder about the consequences of feeding MIR raw stereo signals that were recorded with spaced microphones.
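
    For reference, the first-order (B-format) encoding of a mono signal described on that page boils down to something like this rough sketch (angle conventions differ between sources, so take the signs with a grain of salt):

        # Sketch: classic first-order Ambisonics (B-format) encoding of a mono signal.
        import numpy as np

        def encode_b_format(signal, azimuth_deg, elevation_deg=0.0):
            """Return the W, X, Y, Z channels for a mono source at the given direction."""
            a = np.radians(azimuth_deg)
            e = np.radians(elevation_deg)
            w = signal * (1.0 / np.sqrt(2.0))    # omnidirectional component
            x = signal * np.cos(a) * np.cos(e)   # front/back
            y = signal * np.sin(a) * np.cos(e)   # left/right
            z = signal * np.sin(e)               # up/down
            return np.stack([w, x, y, z], axis=-1)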

    Does MIR effectively position the incoming raw left and right signals at the locations on the MIR stage represented by the extreme left and right of the instrument icon? As if you were setting up one speaker at the extreme left and a second speaker at the extreme right of the instrument icon, with the speakers playing the left and right channels of the raw signal respectively?


  • @BachRules said:

    [...] I wonder about the consequences of feeding MIR raw stereo signals that were recorded with spaced microphones.

    Does MIR effectively position the incoming raw left and right signals at the locations on the MIR stage represented by the extreme left and right of the instrument icon? As if you were setting up one speaker at the extreme left and a second speaker at the extreme right of the instrument icon, with the speakers playing the left and right channels of the raw signal respectively?

    Yes, the positioning of a stereo source is achieved by treating the left and right channels as individual mono sources. Still, they share all other aspects determined by the chosen Instrument Profile, as well as the Icon's Volume, Character, Dry/Wet Ratio and Rotation.

    ... That said, I should add that MIR is not meant to be used for complex signals like mixes of different individual instruments (as opposed to ensembles of identical instruments). While it will work on a technical level, much of MIR's "magic" comes from the fact that the MIR engine is able to treat _individual_ sources differently, depending on their typical characteristics and their positions. The same is true for pre-panned sources: for best results, the signal-inherent panning information should be compensated for before feeding it into MIR.
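
    If it helps to picture the "two mono sources" idea, here is a very rough first-order sketch - not MIR's actual code, and the spread angle and sign conventions are made up for the example:

        # Sketch: position a stereo source by encoding its left and right channels as
        # two mono sources at slightly different azimuths (first-order B-format).
        import numpy as np

        def encode_mono(signal, azimuth_deg):
            """W, X, Y, Z channels for a horizontal mono source (left = positive azimuth)."""
            a = np.radians(azimuth_deg)
            return np.stack([signal / np.sqrt(2.0),   # W
                             signal * np.cos(a),      # X
                             signal * np.sin(a),      # Y
                             np.zeros_like(signal)],  # Z (no elevation)
                            axis=-1)

        def encode_stereo(left, right, center_deg=0.0, spread_deg=15.0):
            # Each channel becomes its own mono source; both share the overall
            # position ("center") and width ("spread") of the instrument icon.
            return (encode_mono(left, center_deg + spread_deg / 2.0) +
                    encode_mono(right, center_deg - spread_deg / 2.0))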

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • Thanks Dietz.