Vienna Symphonic Library Forum

  • Expression Controller.

    Hi to the forum,

    I have been studying quite carefully Jay Bacal's demo of Fauré's Pavane. Jay shows very careful, detailed and intricate use of the expression and velocity crossfade controllers to shape the phrases in his music, plus his use of tempo, which gives a very musically sensitive performance.

    I do understand - well, please correct me if I am wrong here - that the velocity crossfade is used very effectively to control both the volume and the timbre of the instrument. By that I mean that each velocity layer, as recorded on the Silent Stage, has a different volume and also a different timbre, so as you move the xfade up or down, you get a realistic sound with the volume and the timbre associated with each layer.

    I am just not sure, though, what the expression fader does - even though I use it. Does it actually control just the volume, or is there something else that it does as well?

    Thanks if anyone can let me know.

    best,

    Steve. 


  • Steve Martin, thanks for asking this; I have the same question in mind! I'm eager to read an answer. Alain LeBlond

  •  Hi Steve, hi Migot,

    the Vel Xfade crossfades through different recorded velocities - which means it does not only "change volume" but also gives you the expressive sound of each velocity layer we recorded for each patch.

    Best,

    Paul 


    Paul Kopf Head of Product Marketing, Social Media and Support
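    Paul's point - that the crossfade moves between actual recordings rather than just scaling gain - can be sketched in a few lines. This is a toy illustration only, not VSL's actual engine; the layer names, the linear blend, and the 0-127 scaling are my own assumptions.

    ```python
    # Toy sketch (NOT VSL's implementation): a velocity crossfade blends two
    # recorded velocity layers, so both level AND timbre change together,
    # while expression (MIDI CC 11) only scales the overall volume.

    def vel_xfade(soft_layer, loud_layer, xfade_cc):
        """Blend two velocity-layer samples; xfade_cc is a MIDI value 0-127.
        At 0 you hear only the soft layer (its level and its timbre);
        at 127 only the loud layer."""
        mix = xfade_cc / 127.0
        return [(1.0 - mix) * s + mix * l
                for s, l in zip(soft_layer, loud_layer)]

    def apply_expression(samples, expr_cc):
        """Expression (CC 11) scales volume only; timbre is untouched."""
        gain = expr_cc / 127.0
        return [gain * s for s in samples]

    # Two toy "recordings": the loud layer is both louder and brighter.
    soft = [0.10, 0.08, 0.10, 0.08]
    loud = [0.90, -0.60, 0.90, -0.60]

    blended = vel_xfade(soft, loud, 64)      # level and timbre move together
    quieter = apply_expression(blended, 32)  # level drops, timbre unchanged
    ```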
  • Velocity goes from the soft to the harsh character of the sound ... expression is the volume. Easy :D

  •  Hi Paul and Hetoreyn,

    Thanks for the answer. It means I now understand that controller's details, so I can make better decisions about achieving expression in my playback.

    best,

    Steve. 


  • Well, with increasing degrees of "harshness" (i.e., timbre achieved by blowing, bowing, striking, etc.) naturally comes volume, so the two are inextricably linked - unless all samples were normalized to the same output level (i.e., a flute sample triggered by a MIDI note at velocity 25 has the same volume as a flute sample triggered by a MIDI note at velocity 97, but exhibits a different timbre). My understanding of Paul's comments is that the two controllers were separated via programming, but that within the velocity crossfade the two sound characteristics remain linked.