Vienna Symphonic Library Forum

  • @Another User said:

    Does MIR automatically add delay to the direct signal to simulate the time it takes sound to travel across distance?

    No. This is something that's actually a problem in a real recording situation, as it makes all instruments at a distance appear later than the player actually intended. If you want that behaviour, please use a delay on your MIDI or audio track. 😊

    BTW: Dry and wet signal are perfectly time-aligned in MIR Pro - something recording engineers like me can only dream of in the real world. 😉

    HTH,


    /Dietz - Vienna Symphonic Library
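
    To put a rough number on that manual delay: a minimal sketch of the arithmetic, assuming the usual speed of sound in air of about 343 m/s; the function name is purely illustrative and is not anything inside MIR.

    [code]
    # Rough sketch: pre-delay to add to a MIDI or audio track to simulate
    # sound travelling from a distant player to the listener.
    # Assumes dry air at ~20 degrees C (speed of sound ~343 m/s).

    SPEED_OF_SOUND_M_S = 343.0

    def distance_delay_ms(distance_m: float) -> float:
        """Return the one-way propagation delay in milliseconds."""
        return distance_m / SPEED_OF_SOUND_M_S * 1000.0

    # A player 10 m further back arrives roughly 29 ms "late":
    print(round(distance_delay_ms(10.0), 1))   # -> 29.2
    [/code]
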
  • I can barely hear the air-absorption effect in this (free):

    [url]http://www.tokyodawn.net/proximity/[/url]

    The air-absorption effect in VSS sounds more pronounced to me.

    @BachRules said:

    Does MIR automatically add delay to the direct signal to simulate the time it takes sound to travel across distance?

    No. This is something that's actually a problem in a real recording situation, as it makes all instruments at a distance appear later than the player actually intended. If you want that behaviour, please use a delay on your MIDI or audio track. 😊

    BTW: Dry and wet signal are perfectly time-aligned in MIR Pro - something recording engineers like me can only dream of in the real world. 😉

    HTH,

    Having no recording-engineer skills myself, I can't quite picture what correct "time-alignment" would mean; but I figure you mean you've tuned MIR to avoid the bad interference that can result from careless summing.

    This raises another question for me. If you set dry to 0% and wet to 100%, does the wet signal include only reflections off the walls, or does it also include a component which is direct from the source? This is something which confuses me about all convolution reverbs, not just MIR.

    I think you are saying that the wet signal includes a component direct from the source; and this one is delayed according to the speed of sound; and the dry signal isn't delayed; but that would cause phase problems between the wet direct signal and the dry direct signal; so somewhere I must be misunderstanding you?

    Thanks for your explanations.


  • @Another User said:

    I think you are saying that the wet signal includes a component direct from the source; and this one is delayed according to the speed of sound; and the dry signal isn't delayed; but that would cause phase problems between the wet direct signal and the dry direct signal; so somewhere I must be misunderstanding you?

    As soon as you no longer mix up the meanings of "close-mic'ed signal" and "direct signal", you will be fine. 😊 In MIR, the dry, readily positioned input signal is what can be considered the "ideal" close mic. OTOH, as pointed out above: almost any IR will initially include a direct signal, even if it's barely discernible. That's what we got rid of for MIR, but we put the "close mic's" signal in the same position, which is the ideal solution from an audio-engineering POV and gives you enormous amounts of creative freedom. 😊

    HTH,


    /Dietz - Vienna Symphonic Library
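
    The interference being asked about here is the classic comb-filter effect: summing a signal with a copy of itself delayed by a few milliseconds carves periodic notches into the spectrum, which is exactly what time-aligning the dry and wet components avoids. A minimal sketch of that arithmetic, assuming idealised equal-level copies (a real dry/wet mix would be less extreme):

    [code]
    # Sketch: notch frequencies produced by summing a signal with a
    # delayed, equal-level copy of itself (the comb filter a misaligned
    # direct signal would cause).

    def comb_notches(delay_ms: float, count: int = 4) -> list[float]:
        """First few notch frequencies (Hz) for a given delay."""
        delay_s = delay_ms / 1000.0
        return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

    # A direct signal arriving 3 ms late (about 1 m of extra path)
    # would notch the mix at roughly 167, 500, 833 and 1167 Hz:
    print([round(f) for f in comb_notches(3.0)])
    [/code]
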
  • @Dietz said:

    As soon as you no longer mix up the meanings of "close-mic'ed signal" and "direct signal", you will be fine. 😊 In MIR, the dry, readily positioned input signal is what can be considered the "ideal" close mic.

    I don't know if I'm fine yet. I'm still adjusting to the fact that something named "dry" is complexly processed. Here's what I'm thinking now: MIR creates the "dry" signal by doing the following to the source signal:

    1. applying the selected Character EQ;
    2. applying the selected Room EQ;
    3. adjusting its stereo width;
    4. panning it;
    5. amplifying or attenuating it according to its direction and selected radial pattern (if Dry Directivity is selected);
    6. amplifying or attenuating it according to its distance from the Main Mic (if Global Dry Volume Handling: Distance Dependent Scaling is selected); and
    7. delaying it according to its distance from the MIR Main Microphone.

    When you tell me "Dry and wet signal are perfectly time-aligned in MIR Pro", that means the same thing as (7) in my list above. If I'm wrong about (7), then how are you determining how much to delay the dry signal for the purpose of "time alignment with the wet signal"?

    Does my list omit any other processing used in creation of the "dry" signal?
    What does "Consider Microphone Offset" do?
    Thank you.
    Why is it so hard to put spaces between my paragraphs. I'm going to stop spending time on that. I don't think whoever made this forum software wants me to space my paragraphs.
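
    For what points 5 and 6 of the list above gesture at, the textbook behaviour looks something like the sketch below. The 1/r distance law and the cardioid-like pattern are generic stand-ins, not MIR's actual measured directivity profiles or scaling curves:

    [code]
    import math

    # Sketch of what "directivity" and "distance dependent scaling"
    # mean in general terms. The 1/r law and the cardioid-like pattern
    # are textbook stand-ins, NOT MIR's actual profiles.

    def distance_gain_db(distance_m: float, ref_m: float = 1.0) -> float:
        """Free-field level drop: -6 dB per doubling of distance."""
        return -20.0 * math.log10(distance_m / ref_m)

    def directivity_gain(angle_deg: float) -> float:
        """Hypothetical cardioid-ish radiation pattern (1.0 on-axis)."""
        return 0.5 * (1.0 + math.cos(math.radians(angle_deg)))

    print(round(distance_gain_db(8.0), 1))     # 8 m away -> about -18.1 dB
    print(round(directivity_gain(90.0), 2))    # side-on  -> 0.5
    [/code]
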

  • @BachRules said:

    Why is it so hard to put spaces between my paragraphs. I'm going to stop spending time on that. I don't think whoever made this forum software wants me to space my paragraphs.

    Our forum software is known to be quite peculiar when it comes to formatting a message. Have you tried to apply the following changes already ...?


    /Dietz - Vienna Symphonic Library
  • @Another User said:

    What does "Consider Microphone Offset" do?

    It means that the distance created by manually moving the Main Microphone away from its original position will be taken into account, too, when applying the Distance Dependent Scaling to a dry signal's volume. This might not always be what you want to hear, but it should be considered the norm (that's why it's "On" by default).

    ... I hardly dare to hope that this answered all your questions  ....?

    😉


    /Dietz - Vienna Symphonic Library
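
    To make the "Consider Microphone Offset" behaviour concrete: the option simply lets any extra distance created by moving the Main Microphone feed into the same distance-dependent attenuation. A toy continuation of the sketch above, again using the generic 1/r law rather than MIR's actual curve:

    [code]
    import math

    def distance_gain_db(distance_m: float, ref_m: float = 1.0) -> float:
        return -20.0 * math.log10(distance_m / ref_m)

    # Instrument sits 8 m from the Main Microphone's original spot.
    # Pulling the microphone back another 4 m lengthens the path the
    # dry signal is scaled for - but only if the offset is considered.
    base_distance = 8.0
    mic_offset = 4.0

    print(round(distance_gain_db(base_distance), 1))               # -> -18.1 dB
    print(round(distance_gain_db(base_distance + mic_offset), 1))  # -> -21.6 dB
    [/code]
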
  • Hi Dietz

    It is very good to hear this specific information.  One other thing I was wondering: if you took this 100% "dry" signal out of MIR and routed it into a hardware reverb, would it have the directional and spatial qualities of the concert hall, but the reverb of the hardware?  Or would it be missing some basic signal processing (aside from the great quality of MIR's own reverb)?

    The reason I ask is I have hardware reverb I still really like and was thinking of doing that - using MIR for placement and image, and a Lexicon for the actual reverb space of the hall. 


  • @William said:

    Hi Dietz

    It is very good to hear this specific information.  One other thing I was wondering: if you took this 100% "dry" signal out of MIR and routed it into a hardware reverb, would it have the directional and spatial qualities of the concert hall, but the reverb of the hardware?  Or would it be missing some basic signal processing (aside from the great quality of MIR's own reverb)?

    The reason I ask is I have hardware reverb I still really like and was thinking of doing that - using MIR for placement and image, and a Lexicon for the actual reverb space of the hall.

    Sure you can do that! Just reduce the reverb length and/or the dry/wet ratio to values that fit your needs, and use MIR as a powerful Ambisonics panner! The dry signals will keep all their volume, panning and width settings.

    In case you don't have access to a Lexicon, there's MIRacle, which comes as an add-on with MIR Pro for exactly that purpose.

    HTH,


    /Dietz - Vienna Symphonic Library
  • Thanks Dietz, I will try that. 


  • @BachRules said:

    Why is it so hard to put spaces between my paragraphs. I'm going to stop spending time on that. I don't think whoever made this forum software wants me to space my paragraphs.

    Our forum software is known to be quite peculiar when it comes to formatting a message. Have you tried to apply the following changes already ...?

    Thanks, Dietz. Yes, I have made that change already -- actually someone made it for me:

    [url]http://community.vsl.co.at/forums/t/37424.aspx[/url]

    But the forum software still occasionally omits the spaces I put between paragraphs. I was working around it by going into HTML mode to edit my posts and add formatting, but when even that didn't work, I realized I need my time for other things. I surrender to the VSL forum software, and my posts will be erratically formatted from now on, because that is the will of the VSL forum software.


  • @Another User said:

    No delays are applied to the dry signal.....

    You say there is time-alignment. Time alignment is accomplished by applying delay to signals. And yet you say no delay is applied to the dry signal. So I am inferring that time-alignment is done by adding a negative delay to the wet signal, as that seems the only possibility left.


  • @Another User said:

    You say there is time-alignment. Time alignment is accomplished by applying delay to signals. And yet you say no delay is applied to the dry signal. So I am inferring that time-alignment is done by adding a negative delay to the wet signal, as that seems the only possibility left.

    The time-alignment is done when preparing the raw impulse responses for MIR. The runtime delay and the remnant direct signal are carefully removed during editing. The original direct-signal component is then replaced by the readily positioned dry signal in MIR Pro, eliminating any phasing or delay issues between the two of them. - Like I wrote before: a recording engineer's dream. 😉

    Best,


    /Dietz - Vienna Symphonic Library
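
    As a purely illustrative picture of that IR-editing step (MIR's real editing is done far more carefully; the thresholds and timings below are invented), one can imagine trimming the propagation delay and the direct-sound spike from a raw impulse response so that only the room's reflections remain:

    [code]
    import numpy as np

    # Toy illustration only: strip the leading silence and the direct-sound
    # spike from a raw impulse response, leaving just the reflections.

    def strip_direct(ir: np.ndarray, sr: int, direct_ms: float = 5.0,
                     threshold: float = 1e-3) -> np.ndarray:
        """Remove leading silence plus the first few ms around the direct sound."""
        onset = int(np.argmax(np.abs(ir) > threshold))   # first audible sample
        cut = onset + int(sr * direct_ms / 1000.0)       # skip the direct spike
        return ir[cut:]

    # Fake IR: 20 ms of air travel, a direct spike, then a reflection at 60 ms.
    sr = 48_000
    ir = np.zeros(sr // 2)
    ir[int(0.020 * sr)] = 1.0      # direct sound
    ir[int(0.060 * sr)] = 0.3      # first reflection
    print(len(strip_direct(ir, sr)))   # shorter IR, starting just after the direct sound
    [/code]
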
  • @BachRules said:

    I want to split a signal coming out of VI, and I want to send one branch of the signal to MIR Pro, and then I want to mix (1) the raw signal with (2) MIR-Dry;

    ... What are you trying to achieve ...?

    MIR seems excellent at simulating what a listener would hear seated in a real room with the players. MIR seems to do this by default.

    A different goal of mine is to simulate the sound of orchestra professionally produced for playback on a home stereo system. Offhand, I recall reading that this sound has commonly been achieved by summing a Decca tree above the conductor with close mics. You say here that MIR can't exactly simulate that scheme:

    [url]http://community.vsl.co.at/forums/t/33389.aspx[/url]

    so I'm looking for ways to approximate it. In that thread you recommend: "... you can come close by using a nearby position for the Secondary Microphone."

    I have doubts about that, since there's only one Secondary Mic, positioned at only one location in the room, whereas I thought orchestral recordings for home listening generally use numerous close mics dedicated to particular sections and soloists. (?) So I'm looking for ways to simulate those close mics, or at least the end result when they're summed with a Decca tree.

    Now I'm thinking you mean the 2nd Mic can be used (in conjunction with the 1st Mic) to help simulate the sound of a Decca tree itself, while the MIR-Dry signals are used to simulate the close mics, and that might be all I need, without mixing in the raw signal. Before this discussion, I hadn't realized it would be perilous to mix in the raw signal, as that's the regular practice with regular reverbs.

    Decca-tree simulation might not be the best way to accomplish what I want; I'm a mixing novice, but I at least know I don't want to simulate what a listener in the 7th row would hear, as the experts making CDs for home listening don't just put a mic in the 7th row and leave it at that.


  • My personal strategy is to use a close mic with a wide stereo field at the "Conductor" position, then even a little bit closer to the orchestra, and then a secondary mic way toward the back.  It doesn't sound like a person sitting in the orchestra.  It allows me to mix in the dry sound as I wish plus the room sound as I wish and gets much closer to simulating classical recordings than the effect of just putting a microphone out in the audience.  I've found that the 7th row default setting doesn't get the kind of clarity that a real recording with its close mics will have and yet doesn't quite pick up the warmth of the room either, so I'll place the Conductor mic as close as humanly possible to the instruments and the secondary mic pretty far back and play with the balance between the two.  I then play with dry/wet on an individual instrument basis depending on whether or not it's a VSL sample and whether or not I imagine the instrument's sound would be absorbed or would stand out in a real orchestral recording, and more importantly, I'll play with the character presets on the individual instruments to achieve a similar effect as well.  Might be worth trying out if you're not going for an audience-member sound.


  • @BachRules said:

    Now I'm thinking you mean ...

    I think now you've got it! 😊 That's the way to use MIR. - Maybe the most-quoted sentence in this forum is "MIR is _not_ just another reverb." It is meant to be a virtual 3-dimensional room and should be treated as such.

    Sidenote: A Decca tree is just one of many possible ways to set up a main microphone array for orchestral recording. The fact that it was promoted by a well-known record company (Decca) might be one of the reasons for its popularity - but other setups are able to surpass the "tree" in several respects.

    It might be a good idea to take a look at the built-in MIRx-settings and/or all the "Venue Presets" available from your User Area to get a feeling for the typical use of MIR Pro.

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • @Casiquire said:

    My personal strategy is to use a close mic with a wide stereo field at the "Conductor" position, then even a little bit closer to the orchestra, and then a secondary mic way toward the back.  It doesn't sound like a person sitting in the orchestra.  It allows me to mix in the dry sound as I wish plus the room sound as I wish and gets much closer to simulating classical recordings than the effect of just putting a microphone out in the audience.  I've found that the 7th row default setting doesn't get the kind of clarity that a real recording with its close mics will have and yet doesn't quite pick up the warmth of the room either, so I'll place the Conductor mic as close as humanly possible to the instruments and the secondary mic pretty far back and play with the balance between the two.  I then play with dry/wet on an individual instrument basis depending on whether or not it's a VSL sample and whether or not I imagine the instrument's sound would be absorbed or would stand out in a real orchestral recording, and more importantly, I'll play with the character presets on the individual instruments to achieve a similar effect as well.  Might be worth trying out if you're not going for an audience-member sound.

    Thanks for your help with this. I haven't even begun experimenting with the 2nd Mic yet, but when I do I'll try your suggestions.

    @Dietz said:

    ... It might be a good idea to take a look at the built-in MIRx-settings and/or all the "Venue Presets" ...

    The "Venue Presets" are unavailable to me since I don't have VE Pro, but MIRx-in-MIRPro is very helpful.


    I understand that concept bachrules mentioned of recreating the instruments within the listener's space, which is indeed the ideal of many of the great classical recordings of the past - such as my favorites, the 70s London FFRR Mahler recordings, which include Solti's 8th and 3rd.  It is totally different from recreating a concert hall experience.  It is, instead, trying to place the instruments into the room that the record player / CD player / docked iPod / whatever is in.  I've thought of doing that, but because the Vienna Konzerthaus and other spaces in MIR are so awesomely great I haven't yet gone ahead with it.  Though I've thought that one could simply use far more dry signal within a provided MIR venue to accomplish this.  That approach could actually be the most like how this would be accomplished in live recording - close miking within a consistent concert hall/recording venue.

    Anyway I greatly appreciate being able to get this info from Dietz - a rare opportunity to hear from a master about mixing.


  • @Another User said:

    ... spaced microphones. These microphones capture sounds at differing times because of their physical separation, and so record time-of-arrival information in the two channels.... Replaying a stereo recording with timing differences between the two channels leads to a confusing set of time-of-arrival differences for our ears, but the sound is normally still perceived as having width and a certain amount of imaging information, and it usually sounds a lot more spacious than a coincident recording.... Many people specifically prefer the stereo presentation of spaced-pair recordings, finding them easier to listen to than coincident recordings.

    Is it possible to get such timing differences out of MIR (outputting to stereo)? I note from the MIR manual:

    The Distance parameter in the Output Format Editor allows for the virtual creation of non-coincident microphone arrays (which the Ambisonics microphone always is by definition). Using sophisticated decorrelation algorithms, the reverb produced by MIR Pro is greatly enhanced in perceived spaciousness, without sacrificing the all-important localization cues of the original impulse responses.

    By increasing that Distance parameter, can I get MIR to simulate the timing differences which would be recorded by a Decca tree? I realize MIR is meant to be a virtual 3-dimensional room, and the Decca-style timing differences I'm asking about might be inconsistent with that.
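
    For a sense of scale, the inter-channel arrival-time difference a real spaced pair produces can be estimated from capsule spacing and source angle. The numbers below are generic geometry and say nothing about how MIR's Distance parameter works internally:

    [code]
    import math

    # Back-of-the-envelope: arrival-time difference between two spaced
    # capsules for a distant source arriving at a given angle.

    SPEED_OF_SOUND_M_S = 343.0

    def spaced_pair_itd_ms(spacing_m: float, angle_deg: float) -> float:
        """Inter-channel time difference for a far-field source."""
        return spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND_M_S * 1000.0

    # Capsules ~2 m apart, source 30 degrees off-centre: ~2.9 ms difference.
    print(round(spaced_pair_itd_ms(2.0, 30.0), 1))
    [/code]
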

  • @BachRules said:

    By increasing that Distance parameter, can I get MIR to simulate the timing differences which would be recorded by a Decca tree? I realize MIR is meant to be a virtual 3-dimensional room, and the Decca-style timing differences I'm asking about might be inconsistent with that.
     

    The "Distance" which MIR is able to create between the individual capsules of the Main Microphone array is of course a virtual one. But as such it allows for a "best of both worlds" approach, combining the benefits of both coincident and spaced microphone arrays (which wouldn't be possible in reality): "Distance" creates additional decorrelation between the individual channels, but only in the late part of the impulse response - that's what we would call "reverb" most of the time. Like that, the perceived "envelopping" is greatly enhanced. The Direct signal and the early reflection phase (about 200 - 300 ms) remains unprocessed, thus keeping the perfect imaging of the coincident array intact.

    ... adding a Secondary Microphone will add actual run-time delays and even more enveloping and "depth", but as it will provide wet signal only, the imaging of the Main Mic won't get distorted too much.

    Kind regards,


    /Dietz - Vienna Symphonic Library
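
    One way to picture what is described here is to split an impulse response around 250 ms and leave the direct/early portion untouched, processing only the tail. The sketch below shows just that split; the decorrelation MIR actually applies to the tail is proprietary, so here it is only a placeholder:

    [code]
    import numpy as np

    # Toy picture of the "late part only" idea: split an impulse response
    # around 250 ms and leave the direct/early portion untouched.

    def split_early_late(ir: np.ndarray, sr: int, split_ms: float = 250.0):
        split = int(sr * split_ms / 1000.0)
        return ir[:split], ir[split:]

    def decorrelate(tail: np.ndarray) -> np.ndarray:
        """Placeholder for a decorrelation stage (e.g. per-channel all-pass)."""
        return tail

    # Fake 2-second IR: decaying noise.
    sr = 48_000
    ir = np.random.default_rng(0).standard_normal(sr * 2) * np.exp(-np.arange(sr * 2) / (0.5 * sr))
    early, late = split_early_late(ir, sr)
    processed = np.concatenate([early, decorrelate(late)])
    print(len(early), len(late))   # 12000 samples stay untouched, the rest is "reverb"
    [/code]
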
  • @BachRules said:

    By increasing that Distance parameter, can I get MIR to simulate the timing differences which would be recorded by a Decca tree? I realize MIR is meant to be a virtual 3-dimensional room, and the Decca-style timing differences I'm asking about might be inconsistent with that.
     

    The "Distance" which MIR is able to create between the individual capsules of the Main Microphone array is of course a virtual one. But as such it allows for a "best of both worlds" approach, combining the benefits of both coincident and spaced microphone arrays (which wouldn't be possible in reality): "Distance" creates additional decorrelation between the individual channels, but only in the late part of the impulse response - that's what we would call "reverb" most of the time. Like that, the perceived "envelopping" is greatly enhanced. The Direct signal and the early reflection phase (about 200 - 300 ms) remains unprocessed, thus keeping the perfect imaging of the coincident array intact.

    ... adding a Secondary Microphone will add actual run-time delays and even more enveloping and "depth", but as it will provide wet signal only, the imaging of the Main Mic won't get distorted too much.

    Kind regards,

    Thanks. That's all my questions, at least for now. Thanks for your outstanding support.