Vienna Symphonic Library Forum

  • ps - like I said, I haven't tried to measure my VSL instruments to see how much attack latency there is, or whether it's different per articulation, etc.  But a product suggestion for all sample developers, including VSL: have the instrument report a consistent amount of overall latency to the host, and then automatically adjust the attack time of each articulation relative to that, so that they all sound late by exactly the same amount.  The host's Plugin Delay Compensation can then bring them all back early again, and voilà: they'd all sound as desired, on the grid.  To my knowledge, no sample library does this, but I would be supremely impressed to find out if someone does.
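
    To make the idea concrete, here's a toy sketch (not anything VSL actually does - the articulation names and latency numbers are invented): pad every articulation up to the slowest one's latency, report that single figure to the host, and let PDC pull the whole track early.

    ```python
    # Hypothetical illustration of the latency-normalization idea above.
    # Articulations and their measured attack latencies are made-up numbers.
    ATTACK_LATENCY_MS = {
        "staccato":  10.0,
        "sustain":   60.0,
        "legato":   110.0,
    }

    def normalize(latencies):
        """Pad every articulation so all of them sound late by the same amount."""
        reported = max(latencies.values())   # single latency reported to the host
        padding = {art: reported - lat for art, lat in latencies.items()}
        return reported, padding

    reported, padding = normalize(ATTACK_LATENCY_MS)
    print(f"report {reported} ms; PDC shifts the track {reported} ms early")
    for art, pad in padding.items():
        print(f"  {art}: add {pad} ms of attack delay -> total {reported} ms")
    ```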


  • Very good food for thought, Dewdman.  Thanks so much for taking the time to explain all of that to me!  I think you just saved me a lot of frustration and trial-and-error in the coming years.  I'll keep all that in mind as I experiment and practice MIDI sequencing.

    Cheers!

    - Sam


  • @Seventh Sam said:

    Do you ever run into situations where you want to score, say, an ultra fast and complex run that is too hard to simply play in on a keyboard and, since everything is off the grid, you end up spending way too much time fussing with individual notes to get it to sit right with the unquantized rhythm?  I would be concerned about that kind of thing with a pure off-the-grid approach (I'm not extremely proficient at piano nor is my MIDI controller very, eh, ergonomic).

    So, this kind of gets into personal philosophy about programming.  Personally, I view the computer and MIDI controller as an instrument to be technically mastered.  The ultimate level of proficiency is someone like Daniel Barenboim, who can conduct and rehearse a top orchestra and also sit down and play the entire Beethoven piano sonata cycle.  Except in the case of programming, your "conducting" is laying out the session, and your "performing" is your fingers on the keyboard and mod wheel.

    That's the goal, but obviously not everyone is going to get to that Daniel Barenboim level of computer/MIDI controller mastery.

    A few tips to tackle technically difficult passages:

    1. The most important thing is the RHYTHM of the passage.  If you can get the feel of the gesture but miss most of the notes, just go in and arrow the notes up or down to the correct pitches.
    2. Think about which notes of a gesture should be emphasized.  If it's a scale, it's generally the last note, especially if it ends on a strong beat.  Adjust the velocities or the velocity cross-fade so there's a little crescendo in there (see the sketch after this list).
    3. Use two hands, and go back and ride the velocity cross-fade in a second pass.
    4. If you are using VI Pro, search through some of the pre-made scales in the APP Sequencer.  Oftentimes you can adjust the notes in the sequencer and it will still sound really convincing.
    5. If there is a percussive hit at the end of the run from other instruments, edit the MIDI so that everyone is playing really close together!
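
    To make tip #2 concrete, here's a toy sketch of shaping a run's velocities into a crescendo toward its last note.  The note names, velocity range, and linear ramp are all assumptions for illustration - this isn't anything VI Pro does for you:

    ```python
    # Replace the played velocities of a run with a ramp into the final note.
    run = [("C5", 82), ("D5", 90), ("E5", 78), ("F5", 88)]  # (pitch, played velocity)

    def crescendo(notes, start=70, end=110):
        """Return the same pitches with velocities ramping linearly up to `end`."""
        step = (end - start) / max(len(notes) - 1, 1)
        return [(pitch, round(start + i * step)) for i, (pitch, _) in enumerate(notes)]

    print(crescendo(run))  # [('C5', 70), ('D5', 83), ('E5', 97), ('F5', 110)]
    ```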

    Check this video out: 


    Notice how he gets her to not play so "correct."  On the page, those notes read as equal durations, but he wants extra space here, a note held longer there, more crescendo, etc.  I think this mentality should be applied to programming.


  • @Seventh Sam said:

    I do have VI Pro, so no worries there.  You mention that you don't always quantize.  When you do, is it to correct errors in your playing or to keep everything in a strict rhythm?  When you play, are you consciously emulating the fluid rhythm inherent in orchestral conducting or are you playing with the idea that you're going to quantize to a strict grid at the end of the day?

    Depends on the size of the ensemble.  If it's, say, a string quartet or quintet, then I won't quantize anything.  Also, I'm a pianist by training, so anything I do on the piano is never quantized.  But if it's a large ensemble piece, like a 100-piece orchestra, then yes, I quantize, because even slight rhythmic inconsistencies will be amplified and the human ear will detect them.  It's the fluctuations in tempo, like slowing down slightly at the end of a melody line, which players do naturally, that make the piece sound more natural.


  • @Another User said:

    1. The most important thing is the RHYTHM of the passage.  If you can get the feel of the gesture but miss most of the notes, just go in and arrow the notes up or down to the correct pitches.
    2. Think about which notes of a gesture should be emphasized.  If it's a scale, it's generally the last note, especially if it ends on a strong beat.  Adjust the velocities or the velocity cross-fade so there's a little crescendo in there.
    3. Use two hands, and go back and ride the velocity cross-fade in a second pass.
    4. If you are using VI Pro, search through some of the pre-made scales in the APP Sequencer.  Oftentimes you can adjust the notes in the sequencer and it will still sound really convincing.
    5. If there is a percussive hit at the end of the run from other instruments, edit the MIDI so that everyone is playing really close together!

    Solid gold.  Thank you for taking the time to help me out.  #1 is especially useful; it makes a lot of sense to capture the human rhythmic feel live above all else, as the other factors (velocity, expression, pitch, etc.) can easily be altered after the fact.

    Another thought occurred to me: in cases where semi-strict to strict rhythmic accuracy is needed during difficult passages, it may be useful to briefly turn the grid on - not to snap or quantize to, but to use as a visual guide for subtly adjusting potentially sloppy live playing.  Once adjusted, the grid could be turned off and the entire phrase moved around freely and fit to the rest of the music by ear.  Something to experiment with, certainly...
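
    For what it's worth, here's a minimal sketch of that "grid as a guide, not a snap" idea - nudging each note only part of the way toward the nearest grid line.  The tick values and the 480-PPQ, 16th-note grid are assumptions:

    ```python
    TICKS_PER_16TH = 120  # assuming 480 PPQ, so a 16th note = 120 ticks

    def nudge(onsets, grid=TICKS_PER_16TH, strength=0.5):
        """Move each onset `strength` (0..1) of the way to its nearest grid line."""
        out = []
        for t in onsets:
            nearest = round(t / grid) * grid
            out.append(round(t + (nearest - t) * strength))
        return out

    played = [5, 128, 233, 371]   # slightly sloppy 16th notes
    print(nudge(played))          # [2, 124, 236, 366] -> tighter, but not robotic
    ```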

    One more question for you, if I may.  You mention velocity crossfading.  I know there are differing opinions on this, but I'd like to hear yours: do you recommend tying Expression and Velocity X-Fade together on one controller, doing one OR the other, or just using Velocity X-Fade to modulate dynamics?  I know that the timbral shift that occurs in Brass and Winds during crescendos and whatnot is not simply "raising the volume," but there are noticeable jumps between velocity layers for certain instruments, especially the solo VI libs.  Any recommendations here?

    Thanks again!

    - Sam


  • @jasensmith said:

    Depends on the size of the ensemble.  If it's, say, a string quartet or quintet, then I won't quantize anything.  Also, I'm a pianist by training, so anything I do on the piano is never quantized.  But if it's a large ensemble piece, like a 100-piece orchestra, then yes, I quantize, because even slight rhythmic inconsistencies will be amplified and the human ear will detect them.  It's the fluctuations in tempo, like slowing down slightly at the end of a melody line, which players do naturally, that make the piece sound more natural.

    Good to know, thank you.  Notes taken, knowledge absorbed.

    - Sam


  • @Seventh Sam said:

    One more question for you, if I may.  You mention velocity crossfading.  I know there are differing opinions on this, but I'd like to hear yours: do you recommend tying Expression and Velocity X-Fade together on one controller, doing one OR the other, or just using Velocity X-Fade to modulate dynamics?  I know that the timbral shift that occurs in Brass and Winds during crescendos and whatnot is not simply "raising the volume," but there are noticeable jumps between velocity layers for certain instruments, especially the solo VI libs.  Any recommendations here?

    I never tie any parameters together.  And usually, Expression is used only when I can't get the desired fade with velocity xfade alone.
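
    As a toy illustration of keeping the two controllers separate, here's a small ramp generator.  The CC numbers are assumptions (Velocity X-Fade on CC1 and Expression on CC11 is a common mapping, not necessarily yours):

    ```python
    def cc_ramp(cc, start, end, steps, start_tick, length_ticks):
        """Generate (tick, cc, value) events forming a linear ramp."""
        events = []
        for i in range(steps):
            frac = i / (steps - 1)
            tick = start_tick + round(frac * length_ticks)
            events.append((tick, cc, round(start + frac * (end - start))))
        return events

    # The main crescendo is ridden on Velocity X-Fade (CC1); Expression (CC11)
    # would get its own, separate pass only if the fade needs more range.
    print(cc_ramp(cc=1, start=40, end=115, steps=8, start_tick=0, length_ticks=480))
    ```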

    I also wanna add another option for dealing with a technically difficult passage and nailing the rhythm.

    If it is 16th notes, just play the 8th notes, then manually fill in the 16th notes (eyeballing it).  For example, if you have C-D-E-F as rapid 16th notes, play just the C and E, then write in the D and F (see the sketch below).  When playing, it might help to still count the 16th notes in your head but only play on the 8ths... a side effect is that counting like this will help develop your technique and overall rhythm.
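
    Here's a toy sketch of that fill-in trick, using the C-D-E-F example.  The tick math assumes 480 PPQ (8th = 240 ticks, 16th = 120):

    ```python
    EIGHTH, SIXTEENTH = 240, 120

    played  = [(0, "C5"), (EIGHTH, "E5")]   # what you actually play, on the 8ths
    fill_in = ["D5", "F5"]                  # the notes you write in afterwards

    def fill_sixteenths(played, fill_in):
        """Place each fill-in note one 16th after the played note before it."""
        merged = list(played)
        for (tick, _), pitch in zip(played, fill_in):
            merged.append((tick + SIXTEENTH, pitch))
        return sorted(merged)

    print(fill_sixteenths(played, fill_in))
    # [(0, 'C5'), (120, 'D5'), (240, 'E5'), (360, 'F5')]
    ```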


  • Stephen,

    Thanks again for taking the time!  All your advice is extremely helpful and I will definitely take it into account as I continue to experiment and practice.

    Sincerely,

    - Sam


  • @Dewdman42 said:

    ps - like I said, I haven't tried to measure my VSL instruments to see how much attack latency there is, or whether it's different per articulation, etc.  But a product suggestion for all sample developers, including VSL: have the instrument report a consistent amount of overall latency to the host, and then automatically adjust the attack time of each articulation relative to that, so that they all sound late by exactly the same amount.  The host's Plugin Delay Compensation can then bring them all back early again, and voilà: they'd all sound as desired, on the grid.  To my knowledge, no sample library does this, but I would be supremely impressed to find out if someone does.

    Audio Imperia Nucleus does this (but not via PDC).  It knows the 'sync point' of every sample and ensures that all samples have the same offset to the sync point, so you can apply a single negative MIDI timing adjustment to the (multi-articulation) track and get everything tight.  For live playing, you can turn a knob to temporarily move the start offset for less latency (and less realism) and better timing and feel, then turn it back.

    It's a brilliant idea and every sample library should have this.
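
    A sketch of the underlying idea (not Audio Imperia's actual code - the 100 ms sync offset is an invented number): if every sample's sync point sits at one constant offset, a single negative track delay lines everything up.

    ```python
    SYNC_OFFSET_MS = 100.0   # hypothetical constant offset to every sample's sync point

    def compensate(note_times_ms, track_delay_ms=-SYNC_OFFSET_MS):
        """Shift note-on times by the (negative) track delay."""
        return [t + track_delay_ms for t in note_times_ms]

    intended = [0.0, 250.0, 500.0]      # where the attacks should land on the grid
    print(compensate(intended))         # [-100.0, 150.0, 400.0] -> when MIDI is sent
    # Each sample's sync point then lands exactly at 0, 250 and 500 ms.
    ```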


  • It's interesting how differently people work.  I no longer use the grid at all, for anything.  I find that grid-locked parts end up sounding robotic and, at the same time, not actually locked in, because of the different latency of different instruments (and across their ranges and articulations).

    I have tried the humanize feature, and I only like it for the slight pitch and expressive effects, not for timing.  But once I start using Finale and Dorico as my STARTING POINT for compositions, rather than using them to finesse scores and parts extractions of legacy projects that started as MIDI, I might change my mind on that.

    As it stands, I find Digital Performer remarkable in how it handles quantization.  Even parts that start as notation get filtered through its quantization functions, with appropriate amounts of sensitivity, emphasis, swing, and randomization.  I find this does wonders, but it often requires several passes before finding the best parameters for each part to both breathe and lock in with the other parts, the way a live performance does.
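
    To give the general shape of that kind of quantizing, here's a loose sketch combining a strength-based pull toward a swung grid with a little randomization.  This is not DP's actual algorithm, and 480 PPQ with a 16th-note grid is assumed:

    ```python
    import random

    def quantize(onsets, grid=120, strength=0.8, swing=0.1, rand=5):
        """Pull onsets toward a swung grid, then re-add a little randomness."""
        out = []
        for t in onsets:
            slot = round(t / grid)
            target = slot * grid
            if slot % 2 == 1:                      # off-beat 16ths get pushed late
                target += swing * grid
            pulled = t + (target - t) * strength   # partial pull, not a hard snap
            out.append(round(pulled + random.uniform(-rand, rand)))
        return out

    print(quantize([3, 131, 244, 352]))   # rerun with different settings per pass
    ```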