Vienna Symphonic Library Forum

  • Logic X MIDI track delay settings ignored by VIs

    This might be a Logic X issue, rather than one relating specifically to VEPro, but I've noticed the following problem and can't find any info about it elsewhere:

    I'm hosting several sample libraries in VEPro instances on both my master and slave PCs (MIDI send tracks, multiport routing and aux return tracks). I want to be able to set MIDI track delay offsets to ensure all instruments are fully synchronised (allowing for articulations with slow or fast attacks).

    I've found that Logic X region delays work as expected, but track delay settings are ignored. This only seems to affect MIDI sends to separate VI tracks; track delays work fine on solo software instrument tracks. The issue seems unrelated to plugin delay compensation.

    UPDATE:

    Having done more tests, this seems to be purely a Logic issue/bug (affecting both Logic 9 and X), so I'll report it to Apple:

    Solo 'Software Instrument' tracks respond both to region and track delays (set in the region parameter boxes). But if you use an 'External MIDI' track to control VIs via the Environment, any track delay settings are ignored (although other track parameter changes, e.g. velocity and transposition, work fine).

    As mentioned above, this makes it very difficult to set up default timing offsets on tracks when using the standard multiport VEPro MIDI routing. Yes, you can retrospectively apply region delays, but a) this is a laborious workaround and b) region delays can't be directly set in milliseconds (only in ticks).
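
    For what it's worth, the ms-to-ticks conversion for region delays only depends on the tempo and the sequencer resolution (Logic resolves to 960 ticks per quarter note). A quick sketch in Python, with illustrative numbers:

        # Convert a millisecond offset into region-delay ticks.
        # Assumes Logic's resolution of 960 ticks per quarter note.
        def ms_to_ticks(ms, bpm, ppq=960):
            ms_per_tick = 60000.0 / (bpm * ppq)  # one beat lasts 60000/bpm ms
            return round(ms / ms_per_tick)

        print(ms_to_ticks(-40, 120))  # a -40 ms offset at 120 bpm -> -77 ticks

    The same millisecond offset maps to a different tick count at every tempo, which is part of what makes tick-based region delays so laborious.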

    Roger


  • In short, it appears that the Track Delay parameter on 'External MIDI' tracks has no effect on the MIDI output. In which case surely it should be fixed or greyed out. I've tried reporting this to Apple, but their Support Forum is currently offline...

    By comparison, Cubase has a simple 'Track Delay in milliseconds' slider - exactly what I need!


    I have also tested this behavior and have reported it to Apple over the course of the past year. They are aware of the issue; whether it will be resolved is unknown.

    I think it is completely an Apple/Logic issue, not a 3rd-party software one.

    My findings for Logic:

    External MIDI tracks: MIDI Track Delay works for positive delays up to around 60 ms. Negative delays are unreliable, but may work up to around -30 ms.

    Software Instrument tracks using the "External Instrument" plugin: all live MIDI input (like playing a MIDI keyboard) is delayed by approximately one audio buffer. On playback, positive and negative track delays work up to about +/- 250 ms. Note that the transport must be stopped and restarted for a change to a Software Instrument track delay to take effect.

    Also note that, with Logic and most other DAWs, when the DAW is generating a lot of MIDI, microtiming, including per-track delays, becomes unreliable.
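
    If the DAW won't apply the delay reliably, one workaround is to bake the per-track offsets into the MIDI file itself before playback. A rough sketch using the Python mido library; the file names, track names, and offsets are made up, and it assumes a single fixed tempo:

        # Bake per-track timing offsets into a Standard MIDI File, as a
        # workaround for Logic ignoring Track Delay on External MIDI tracks.
        # Requires: pip install mido
        import mido

        OFFSETS_MS = {"Horns": -40, "Strings": 25}  # negative = play earlier

        mid = mido.MidiFile("template.mid")         # hypothetical file
        BPM = 120                                   # assumed fixed tempo
        ticks_per_ms = mid.ticks_per_beat * BPM / 60000.0

        for track in mid.tracks:
            shift = round(OFFSETS_MS.get(track.name, 0) * ticks_per_ms)
            if shift == 0:
                continue
            # convert delta times to absolute ticks, shift, convert back
            abs_ticks, times = 0, []
            for msg in track:
                abs_ticks += msg.time
                times.append(max(0, abs_ticks + shift))  # clamp at zero
            prev = 0
            for msg, t in zip(track, times):
                msg.time, prev = t - prev, t

        mid.save("template_offset.mid")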


  • Many thanks for the detailed analysis of the problem - very useful info.

    Good point about the general unreliability of MIDI microtiming when traffic is heavy.


  • Given this issue with MIDI track delay settings, I'd be interested to know how other Logic users manage timing offsets when working with large templates hosting hundreds of instruments within (relatively few) VEPro instances.

    Options which occur to me include:

    1. Apply offsets to the VEP audio return tracks (problematic if you mix multiple instruments down to grouped returns within VEP)
    2. Apply offsets within each individual VI: Kontakt, Play etc. (difficult to implement and doesn't permit negative delays).
    3. Prior to final mix, glue all regions together on each track and apply region delays.
    4. Bounce all tracks to audio (not just stems or submixes) and adjust offsets before final mix.
    5. Give up on Logic and switch to using Cubase!

    Have I missed any other more workable options? I guess this is part of the reason why Logic is best suited to one instrument per track, rather than multi-timbral working.

    Perhaps I'm being idealistic hoping to build default offsets into my template. I know many composers would argue that option 4 is the only realistic way of guaranteeing good synchronisation due to the inherent problems with MIDI timing. But surely it should be possible to pre-configure ballpark offsets - e.g. if you know that instrument x always speaks 40ms faster than instrument y?!

    All thoughts and suggestions welcome!


  • Regarding "perfect sync"

    Choosing from the list above, I'd guess #3 works best, but it is super time-consuming, since the delay for each region would have to be set by hand.

    In my case we have decided not to use VEPro for various reasons, mostly relating to Logic X. We do use track delays within the time boundaries of what Logic actually executes.

    While changing DAWs might help, Cubase, like every DAW, has inconsistent MIDI output timing, especially when many MIDI events occur simultaneously.

    There is also the fact that round-robin sampler setups mean that any single track delay will be inaccurate, by some margin, for some individual samples. In some cases freezing or resetting round robins can help address this. We just fudge it and set an average track delay.
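
    As an aside, that average needn't be pure guesswork: measure the attack delay of each round-robin sample (time from note-on until the waveform reaches an audible level), set the track delay to the negative mean, and the spread tells you the residual error you've accepted. A toy Python sketch with placeholder numbers:

        # Pick one track-delay value for a round-robin patch by averaging
        # measured attack delays. These measurements are placeholders.
        attack_delays_ms = [38.0, 46.0, 41.0, 51.0]  # one per round robin

        mean_delay = sum(attack_delays_ms) / len(attack_delays_ms)
        spread = max(attack_delays_ms) - min(attack_delays_ms)

        print(f"track delay: {-mean_delay:.1f} ms")        # -44.0 ms
        print(f"residual error: +/- {spread / 2:.1f} ms")  # +/- 6.5 ms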

    In general, the more "realistic" a synth orchestra is, the more likely it is that every MIDI playback will sound somewhat different.

    In the case of full orchestra MIDI mockups, it is worth considering what the intended outcome is. Say there's a pronounced ensemble "hit" at an exact point, like beat 1. The graphical depiction of the MIDI in the session should sit right at the GUI beat-one line. How should it sound?

    In the "real world" ensembles of instruments will speak at different times based on the player and a host of other variables. This range of variability is probably as wide or wider than the inconsistencies of MIDI output from/processing inside a DAW.

    I suggest that if a real ensemble somehow managed to play such that all of their sounds reached a listener starting at an exact point in time, the listener would react to the effect as "fake" or "mechanical." So, unless a fake or mechanical effect is desired, this kind of synchrony is undesirable.

    In fact, many of the advancements in sample libraries have been intended to reduce just this effect. Trying to program it out is bound to fail.

    Sometimes the samples within a sample library are edited poorly. In these cases, if extremely accurate timing is a high priority, the library is unusable.

    So, if the answer to "how should it sound?" is "somewhat natural, or believable," then perfectly timed MIDI processing is probably not the most important consideration.

    Maybe the correct question is not "how should it sound?" but rather "how should the MIDI, once translated into audio, be captured as an audio file? And how will that interact with a video synchronized to the same sync protocol used when creating the MIDI data?" This is the need of people using synth orchestras in a soundtrack. It isn't the "MIDI," or even the immediate audio output. It's the capture of the audio output as a PCM file that matters, and it matters most when placing that audio into a video project.

    To address these questions, the following must be considered:

    1. How is the synth orchestra synchronizing to the video track? MTC, LTC, or internal to the DAW?
    2. How accurate is the timing relationship between the DAW musical (bars, beats, units, or bbu) timeline and the DAW video (hours, minutes, seconds, frames, or hh:mm:ss:ff) timeline?

    Regarding (1), many professional systems use MTC to sync the musical and video timelines. MTC has a resolution of 1/4 to 2 frames, roughly 10 to 80 ms at 24 fps. This range is large enough to introduce musically significant asynchrony. Given this coarse timing, creating a truly tight (say 5 ms or less) audio output from a MIDI DAW session is mostly unrealistic. LTC, which is not native to most DAW systems, can be significantly more accurate (in my tests, within a few samples). Keeping the entire project inside one DAW is also much better for overall timing accuracy.
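
    The 10 to 80 ms range follows directly from the frame arithmetic; here is a quick check in Python, with other common frame rates for comparison:

        # MTC resolution: quarter-frame messages give 1/4-frame granularity,
        # and a receiver may need up to two full frames to lock.
        for fps in (24, 25, 30):
            frame_ms = 1000.0 / fps
            print(f"{fps} fps: 1/4 frame = {frame_ms / 4:.1f} ms, "
                  f"2 frames = {2 * frame_ms:.1f} ms")
        # 24 fps: 1/4 frame = 10.4 ms, 2 frames = 83.3 ms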

    Regarding (2), one of the big surprises of working with Logic has been the discovery that its MTC output was influenced by the musical timeline. This has been resolved in Logic 10.2.2. Another consideration is the fact that the MIDI tempo map may be interpreted differently by different MIDI file readers. As a result, a tempo map created in Logic may run differently when opened in Pro Tools.
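
    To see where reader-to-reader drift creeps in, remember that an event's wall-clock time is the accumulation of every tempo segment before it, so two readers that accumulate or round those segments differently will place the same tick at slightly different times. A sketch with the Python mido library (the file name is hypothetical):

        # Walk an SMF tempo map and print the wall-clock time of each
        # tempo change; any difference in how a reader accumulates or
        # rounds these segments shifts every later event.
        import mido

        mid = mido.MidiFile("cue.mid")  # hypothetical file
        ppq = mid.ticks_per_beat

        seconds, tick, tempo = 0.0, 0, 500000  # SMF default = 120 bpm
        for msg in mido.merge_tracks(mid.tracks):
            seconds += mido.tick2second(msg.time, ppq, tempo)
            tick += msg.time
            if msg.type == "set_tempo":
                tempo = msg.tempo
                print(f"tick {tick}: {mido.tempo2bpm(tempo):.2f} bpm "
                      f"at {seconds:.6f} s")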

    So, the answer to "how should the MIDI, once translated into audio, be captured as an audio file? And how will that interact with a video synchronized to the same MTC used when creating the MIDI data?" is: it depends on a lot of factors, but in most cases the interaction with the MTC will be inaccurate and will vary on a pass-by-pass basis.

    Essentially, this is a shortcoming of:

    1. Sample libraries that use real human performances; if the sample libraries were all fully digital, the issue of variable sample starts could be resolved fairly easily.
    2. Outdated synchronization protocols and proprietary synchronization schemes.

    And the solutions:

    1. More carefully edited sample sets, and samplers that account for round-robin sample-start variability.
    2. An industry move toward open-source (non-Dante) audio-over-IP solutions, namely AES67, and a universal packet-based MIDI protocol. This could reduce MIDI playback variability to a matter of nanoseconds.

  • I compose to picture all the time with Logic Pro connected to VE Pro. I run the EW Hollywood Orchestra on a slave PC in VE Pro and some Kontakt orchestral instruments in VE Pro on my Mac. All other stuff runs directly in Logic Pro.

    I have yet to encounter a composition where the timing inconsistencies affecting my hit points could not be resolved by using either region delays or just selecting all the notes in the Event List and moving them back/forward some ticks.

    But then, I am not striving for "perfect" timing because perfect is the antithesis of human. I just want the timing to be as good as I used to achieve back when I had the budgets for real players.


  • Many thanks for those very interesting and useful replies.

    Just to be clear, I'm not aiming for 'perfect sync'. I quite agree that this is neither technically realistic nor artistically desirable.

    But I regularly blend articulations from different libraries and it seems reasonable to expect a modern DAW to allow you to set up default 'ballpark' offsets in templates, to bring tracks within an acceptable tolerance and avoid extra sync adjustments each time you use a particularly slow or fast instrument.

    I've recently set up a duplicate template in Cubase and its simple 'track delay in ms' option is one of several areas where it offers a simpler, streamlined workflow compared to Logic's often cumbersome workarounds.

    I've been using Logic since the days of C-Lab Notator, so I'm very reluctant to jump ship, but it really seems to have fallen behind the competition when it comes to large template, multi-library, multi-timbral working.


  • Ashermusic's solutions make a lot of sense. Using the Event List to move events on a tick-by-tick basis is definitely accurate and right on.

    Question for Ashermusic--in your system is dialog and effects audio + picture kept in Logic or is Logic/VEPro audio sent to Pro Tools or another DAW?

    Rogerdp: I agree Cubase seems more efficient overall than Logic in its MIDI processing and audio thread handling.

    This may be getting far afield of VE Pro. However since this forum addresses the community of music-for-video folks...

    ...Seems that the idea of "perfect sync" is understood to be both not musically ideal and probably not realistic in DAW systems for the next few years anyway.

    What about playback dependability? Here are a few cases:

    • Every time a DAW session is played back, the timing sounds different, confusing the composer and leading to repetitive re-editing of MIDI
    • When a DAW session is opened on a different system, even an identical one, the timing is different, leading to a wide variety of issues
    • The system used for composing is slightly different from the system used for "printing" from MIDI systems to audio files; the timing is different, and the difference trickles down to the final product
    • MIDI regions are nudged around by ticks here and there, making quantize edits more complicated. Also, these "nudged in time" regions, since they are no longer aligned with the DAW sequencer grid, sit in the "wrong" place, which is generally confusing
    • The MTC/audio sync output from the DAW is unreliable. This is definitely the case with Logic

    This last issue, when combined with the first four, leads to a multitude of problems. So, better editing of sample starts and the ability for DAWs and sample players to manage sample-start variability would be desirable.


  • Picture is kept in Logic Pro here.


  • @rogerdp said:

    In short, it appears that the Track Delay parameter on 'External MIDI' tracks has no effect on the MIDI output. In which case surely it should be fixed or greyed out. I've tried reporting this to Apple, but their Support Forum is currently offline...

    By comparison, Cubase has a simple 'Track Delay in milliseconds' slider - exactly what I need!

    Hi rogerdp, did you ever solve this issue in Logic? It is still broken as of today, in version 10.5.1, in 2021.