Vienna Symphonic Library Forum

  • Considering MIR Pro

    I'm just getting into the whole "Engineering" angle so my question may reflect my noobiness...


    I've just downloaded the demo version of MIR Pro (thank you very much VSL!) .... I try to keep as much as possible within the VSL domain. In the main title sequence from Jerry Goldsmith's brilliant Basic Instinct score, he has a "ping pong" delay effect on the xylophone. (https://www.youtube.com/watch?v=G3vkf8e_UcI)
    If I want to be able to reproduce that effect, using Cubase, VE Pro (+ MIR now) do I have to route the xylophone out to Cubase to do the delay, or can I just do it all within my Wonderful VSL world?  If so, how would I do that?

    As an aside, I just have to say, in addition to Special Edition I have acquired several of the very best competitor sample libraries while studying film scoring at UCLA. While there are situations where another library might fit a specific need in a way that SE can't, I find time and again that if I want the very best, most realistic result, I ALWAYS end up layering VSL into everyone else's strings! I recently had to turn to LASS for the Sordino I don't have in SE. Even when Sordino was called for, I found adding sans-Sordino VSL SE Violins into the Sordino mix was what really made the LASS sordino come alive!

    And that's just the samples...the VSL software really outstrips the competition as well. Without mentioning any names in an unflattering context.....let me just put it this way: if sample players were people, and I were....uh....a Native (of some place)....with the first initial K.....if I were invited to a cocktail party....I think I would want to make quite certain that VI Pro was not going to be there! It's so uncomfortable to be publicly humiliated like that!

    Thanks for doing what you do so well, VSL.


  • last edited

    @PeteyMan said:

    In the main title sequence from Jerry Goldsmith's brilliant Basic Instinct score, he has a "ping pong" delay effect on the xylophone. (https://www.youtube.com/watch?v=G3vkf8e_UcI) 

    If I want to be able to reproduce that effect, using Cubase, VE Pro (+ MIR now) do I have to route the xylophone out to Cubase to do the delay, or can I just do it all within my Wonderful VSL world?  If so, how would I do that?

    I tried it here, with VE-not-Pro. When I tried to open VE-not-Pro in Reaper, Reaper froze. I killed the Reaper process, lost my unsaved work, and restarted Reaper. This time VE-not-Pro opened fine.

    VE-not-Pro lets you insert VST FX after the VI instrument but before MIR. First I tried inserting a ping-pong delay, but my ping-pong delay is a 32-bit VST, and while I think that's possible to use in VE-not-Pro, I didn't want to take the time to learn how to do it. So I used Amplitube instead. I got it working, but after half a minute it would stop working, and I had to delete Amplitube from within VE-not-Pro and re-insert it to get it working for another half a minute. I didn't have time left to try it with other VSTs.

    One consideration is MIR's processing of the non-reverb component of the signals you feed into MIR. If you put ping-pong on a xylophone and feed it to MIR, and tell MIR to treat it like a xylophone, MIR creates a virtual xylophone on stage and treats it as if it's 2 meters wide, expandable to 3 meters maximum. So your delay ping-pongs back and forth 3 meters on a large stage, which is narrower than the Goldsmith sound. You can tell MIR to stop treating it like a xylophone and treat it as an abstract instrument, and set the width much wider, but there's still some processing affecting the width of your xylophone.

    You could set up two xylophones, on stage left and stage right, and that would give you additional options.


  • The upshot is, yes, in VE Pro you can use any effect in a signal path to MIR Pro. I would say to think of MIR Pro as an enhancement of the tone after the fact of the delay though.

    Defining the object on stage as a microphone (rather than focused as a particular instrument profile), yes you can make it take up the whole width of the stage and beyond.


  • last edited

    @Another User said:

    The upshot is, yes, in VE Pro you can use any effect in a signal path to MIR Pro. I would say to think of MIR Pro as an enhancement of the tone after the fact of the delay though.

    Defining the object on stage as a microphone (rather than focused as a particular instrument profile), yes you can make it take up the whole width of the stage and beyond.

    At maximum width on the "Cardioid" General Purpose profile, you still won't get hard-panned results like in the Goldsmith model. This is because MIR is simulating a real room, and in a real room you wouldn't get hard-panned results either. Before MIR, a hard ping-ponged signal gives -inf dB on the right when it's sounding on the left:

    [IMG]http://i.imgur.com/yD4sCB9.png[/IMG]

    After MIR, the "dry" signal shows up on both channels even when the source signal is hard left:

    [IMG]http://i.imgur.com/jKlTXWH.png[/IMG]

    That's because MIR is accurately simulating a real room, and a real room is incompatible with a hard-panned ping-pong delay like the one in Goldsmith's music.
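
    For anyone wanting to see what those meters are measuring, here's a minimal sketch (plain NumPy, nothing from VSL; the ping_pong helper and its parameters are invented for illustration) of a hard-panned ping-pong delay on a mono source. Before MIR, each repeat lands entirely on one channel, so the opposite channel sits at -inf dBFS; MIR's room simulation is what spreads the dry signal across both channels afterwards.

    [code]
    # Hedged sketch, not VSL code: a hard-panned ping-pong delay on a mono source.
    # While the signal sounds hard left (dry, or a left repeat), the right channel
    # is exactly zero (-inf dBFS), matching the pre-MIR meter above.
    import numpy as np

    def ping_pong(mono, delay_samples, repeats=4, feedback=0.6):
        """Return (left, right): dry signal hard left, echoes alternating right/left."""
        n = len(mono) + delay_samples * repeats
        left, right = np.zeros(n), np.zeros(n)
        left[:len(mono)] += mono                          # dry signal, hard left
        for i in range(repeats):
            start = delay_samples * (i + 1)
            echo = mono * (feedback ** (i + 1))
            target = right if i % 2 == 0 else left        # alternate channels
            target[start:start + len(mono)] += echo
        return left, right

    if __name__ == "__main__":
        fs = 48000
        click = np.zeros(fs // 10)
        click[0] = 1.0                                    # a single impulse
        L, R = ping_pong(click, delay_samples=fs // 4)    # 250 ms between repeats
        print(float(R[:fs // 4].max()))                   # -> 0.0: right silent while the dry left signal sounds
    [/code]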


  • I never said that the ping-pong would be achieved through MIR Pro itself. In fact, I suggested thinking of it as an enhancement {i.e., room color} "after the fact of the ping pong effect". At maximum width settings I'm sure you get maximum width, is all. I wouldn't even bother with that kind of panning per MIR Pro; it isn't its purpose. It virtualizes players on a stage (who probably cannot bounce off of one wall, vanish, and appear on the other). :)

    I have however sent audio to VE Pro on an audio input with MIR Pro inserted and power panned it from right to left and perceived right to left ok, and used mics set to show the whole width. Using a lot of room is typically kind of diffuse for a special effect, so it's a reverb after the fact of a delay or pan maneuver. I would rather define the 'Crossfeed' in Hybrid Reverb for the channel first, as focused so the later reflections are as focused as the power pan, for such a defined effect.


  • last edited

    @BachRules said:

     The General Purpose profiles come with names like Cardioid, and that made me think of microphones at first, but now I figure those names are referring to radial patterns emitted by virtual speakers, rather than patterns received by virtual microphones.


    It has to be both, by definition. The depth or width would not happen merely through virtual speakers.


  • last edited

    @BachRules said:

     The General Purpose profiles come with names like Cardioid, and that made me think of microphones at first, but now I figure those names are referring to radial patterns emitted by virtual speakers, rather than patterns received by virtual microphones.


    It has to be both, by definition. The depth or width would not happen merely through virtual speakers.

    I was thinking it all happens from the placement of a virtual speaker on the stage, and the virtual speaker has a virtual width and a virtual distance from the MIR main mic. I can't see how a General Purpose profile would simulate a mic.


  • LOL...You guys are so far ahead of me...I have lots of catching up to do....

    Cubase has some delays that come with it - nothing's built in to VE Pro or MIR, right? So if I want to avoid routing in and out of Cubase, I need a third-party delay? Is that right?

    I know if I were just in Cubase, I would need an aux track with a mono delay, and each track panned opposite. Then I add whatever reverb/EQ I need.

    With VEPro in the mix then, what?

    1) Add a track to the xylophone in VE Pro with the [+] on the xylophone track....or create a dedicated bus.....does it matter?

    2) Do a send from the xylophone to that track

    3) Route that to an audio track in Cubase.

    4) Put the delay on the Cubase track

    5) Create a separate audio input in VE Pro for the return

    6) Route the Cubase track with the delay back into VEPro on the track I created in 5 (so now the Cubase track's input is the VEPro track receiving the xylophone send in VEPro, and its output is going to the new VEPro track I created in step 5)

    7) Put the reverb and EQs on the original xylophone track and on the additional audio track coming back from Cubase

    It just feels like I've got something wrong, and there must be a simpler way.


  • It is you who is so far ahead of me. MIR and VE do not include a ping-pong delay. I'd use an Insert, not an Aux bus / Send. See VE-not-Pro manual pp. 30 and 39 for Inserts. I'd probably not use MIR for this particular task, though, as I like my ping-pongs hard-panned.


  • last edited

    A Cubase effect is not going to be seen by VE Pro, so you would need a third-party plug.

    @PeteyMan said:

    With VEPro in the mix then, what?

    1) Add a track to the xylophone in VE Pro with the [+] on the xylophone track....or create a dedicated bus.....does it matter?

    2) Do a send from the xylophone to that track

    3) Route that to an audio track in Cubase.

    I'm assuming you mean you're using Cubase's effect, as you are describing a procedure with a mono 'aux track'...

    In which case 1) & 2) I don't really follow.

    If you have to use the Cubase plugin for the ping pong effect you would send the instrument channel [Xylophone] to the bus in Cubase.

    If you want the result of that processed in MIR Pro hosted in VE Pro, you would now create an FX bus with 'Audio Input' inserted, and send to that bus; it is received as an Input Channel {assigned corresponding to your FX insert} in VE Pro, which is either in the same bus as MIR Pro or you do a send to one. That ultimately is another instrument channel (or channels) you assign an output to in Cubase.

    So essentially, if you are not dealing with a third-party ping pong delay hosted in VE Pro but the ping pong procedure is all Cubase, you are using VE Pro as an external effects rack for whatever you're hoping MIR Pro will do for you. As I said, I would consider this no differently than adding some reverb after the delay, only it's routed outside of Cubase (to the [VE Pro] multitimbral instrument-cum-FX rack).

    NB: it is two things to Cubase now. It is an external effect - meaning that printing the result means Real-Time Export in Cubase, just as you would do with a hardware device outside of Cubase - but the thing you'll print is an instrument channel in Cubase.


  • last edited

    @BachRules said:

     The General Purpose profiles come with names like Cardioid, and that made me think of microphones at first, but now I figure those names are referring to radial patterns emitted by virtual speakers, rather than patterns received by virtual microphones.


    It has to be both, by definition. The depth or width would not happen merely through virtual speakers.

    I was thinking it all happens from the placement of a virtual speaker on the stage, and the virtual speaker has a virtual width and a virtual distance from the MIR main mic. I can't see how a General Purpose profile would simulate a mic.

    I am not apt to do technical writing.  But there is this from the manual [THINK MIR - User Manual Add-On - Rev.1]:

    Ambisonics relies on a meta-audio-format which is not meant to be listened to directly. It allows for decoding of an almost limitless number of actual audio formats, be it broad or narrow stereo, different surround formats, or any other multi-channel format. By defining "virtual" microphones, a dedicated sonic behaviour can be assigned to each channel: The polar patterns as well as the angles of those microphones with regard to the input signal can be controlled after the actual recording.

    http://en.wikipedia.org/wiki/Ambisonics
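
    To make the "virtual microphones" idea from that excerpt concrete, here's a rough sketch (my own assumptions and naming, not MIR's internals): a mono source is encoded into horizontal first-order B-format (W, X, Y), and a virtual microphone's aim and polar pattern are then chosen after the fact, at decode time.

    [code]
    # Hedged sketch of first-order Ambisonics "virtual microphones" (FuMa-style
    # conventions assumed); this is an illustration, not code from MIR.
    import numpy as np

    def encode_bformat(signal, azimuth_rad):
        """Encode a mono signal arriving from azimuth_rad into W, X, Y."""
        w = signal / np.sqrt(2.0)              # omni component (-3 dB convention)
        x = signal * np.cos(azimuth_rad)       # front/back figure-of-eight
        y = signal * np.sin(azimuth_rad)       # left/right figure-of-eight
        return w, x, y

    def virtual_mic(w, x, y, aim_rad, pattern):
        """Decode a virtual mic aimed at aim_rad.
        pattern: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-of-eight."""
        return (pattern * np.sqrt(2.0) * w
                + (1.0 - pattern) * (x * np.cos(aim_rad) + y * np.sin(aim_rad)))

    if __name__ == "__main__":
        sig = np.ones(4)                                   # dummy signal
        w, x, y = encode_bformat(sig, np.radians(90))      # source hard left
        # the same "recording", re-miked three ways after the fact:
        for name, p in [("omni", 1.0), ("cardioid", 0.5), ("figure-8", 0.0)]:
            front = virtual_mic(w, x, y, np.radians(0), p)
            print(name, round(float(front[0]), 3))         # 1.0, 0.5, 0.0
    [/code]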


  • I don't take that to mean xylophones can be defined as microphones.


  • Xylophones would be an instrument profile, adjustable in terms of width and closeness which refers to mic'ing*; I think 'defined as a microphone' and this focus on words is asking the cart to pull the horse really.
    I believe that 'Ambisonics' is how the thing is achieved, period. So you seem to have made a dichotomy that I don't believe and it seems to me is impossible, given the technology.

    * MIR Pro handles directionality (i.e., "room") both from the listener's perspective (the microphone) and from the signal source's perspective (the instrument).

    I don't know why there would be a 'general purpose' mic which would be a different technology than the instrument profile. The 'instrument profile' does take into account known things about VSL library items that aren't known about all the other things you'd put in the space, but it doesn't seem like the latter would not be Ambisonics, or that the analogy to e.g. cardioid and other capsule forms would be omitted for every use other than a VSL library item.


  • last edited

    @Another User said:

    this focus on words is asking the cart to pull the horse really.

    You wrote about defining xylophones as microphones, and I replied to the words you wrote. I'm not seeing how that's asking a cart to pull a horse.


  • A comparison for better understanding:

    If we would talk about visuals, MIR Pro's Instrument Icons stand for a spotlight with variable brightness, color, width and rotation. An "Omni" would emit the same energy and color into all directions, an "Eighth" would emit its light only to the front and the back, a "Cardioid" would resemble your average street lamp ;-) , and so on.

    MIR Pro's Main Microphone would be a 3D-camera, with multiple variable lenses and adjustable sensitivity, and of course free orientation.
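
    In numbers, those shapes are the standard first-order pattern family g(theta) = a + (1 - a) * cos(theta); a quick sketch (using the textbook formula, not anything MIR-specific):

    [code]
    # First-order directivity family: a = 1.0 -> Omni, 0.5 -> Cardioid,
    # 0.0 -> "Eighth" (figure-of-eight). Textbook formula, for illustration only.
    import math

    PATTERNS = {"Omni": 1.0, "Cardioid": 0.5, "Eighth": 0.0}

    for name, a in PATTERNS.items():
        gains = [round(a + (1 - a) * math.cos(math.radians(deg)), 2)
                 for deg in (0, 90, 180)]
        print(f"{name:8s} gain at 0/90/180 degrees: {gains}")
    # Omni     gain at 0/90/180 degrees: [1.0, 1.0, 1.0]
    # Cardioid gain at 0/90/180 degrees: [1.0, 0.5, 0.0]
    # Eighth   gain at 0/90/180 degrees: [1.0, 0.0, -1.0]
    [/code]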

    HTH,


    /Dietz - Vienna Symphonic Library

  • Hey BachRules - thanks for that info - I have no particular desire to go out to Cubase. How did you do it with MIR Pro in the mix with VE Pro? What plug-in did you use? I know it's a pain in the britches...but if you had a few minutes to jot down a step-by-step I'd be as happy as JSB at a Row-Row-Row-Your-Boat convention ;)


  • last edited

    @BachRules said:

    There is a dichotomy between modelling a microphone and modelling a speaker, and the difference affects the audio coming out of MIR. I didn't create this dichotomy; I only observed it. You are picking a speaker model when you set an Instrument Channel, and you are picking mic models when you set the Output Channel.

    I don't think so. The one is designed for the other in this case. Dichotomy means mutually exclusive to begin with.


  • last edited

    @Another User said:

    A comparison for better understanding:

    If we would talk about visuals, MIR Pro's Instrument Icons stand for a spotlight... MIR Pro's Main Microphone would be a 3D-camera....

    MIR 24 emulates 2 "cameras" (microphones) but 24 "spotlights" (speakers). The concepts are distinct and mutually exclusive, in the manual, in the user-interface, and in the code which implements MIR. You, civilization 3, are talking about defining xylophones as "microphones", and you're doing this in a system where you can define at most 2 microphones. I see now that you've invested a lot in your belief that xylophones can be "defined as microphones", for whatever reason; but I'm going to keep viewing Instrument Icons as spotlights (which I'll call "speakers", since this isn't really about visuals).