Vienna Symphonic Library Forum
Forum Statistics

194,422 users have contributed to 42,920 threads and 257,965 posts.

In the past 24 hours, we have 4 new thread(s), 10 new post(s) and 79 new user(s).

  • Further Considerations on Volume/Expression

    [The "Conclusion" that was previously posted here in this thread has been updated and moved to the new thread titled "Conclusion on the Volume/Expression Matter" - which was previosuly the title of this thread.]


  • last edited

    @Another User said:

    •   There is no technical necessity for being mindful of which fader is placed after the other in the audio path.
    •   The combined effect of these two faders is always simply the sum of their individual dB attenuation amounts.

    There is a technical necessity, and no, the result is not the same as soon as effect plugins in the Player's mixer are used.
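
    To illustrate the point with a minimal sketch (toy code, not anything taken from the Player): with a purely linear path the two attenuations simply add, but place a level-dependent plugin such as a compressor between the two gain stages and their order starts to matter.

    [code]
    import numpy as np

    def gain(signal, db):
        """Apply a plain linear gain, expressed in dB."""
        return signal * 10 ** (db / 20)

    def crude_compressor(signal, threshold_db=-12.0, ratio=4.0):
        """Toy static compressor: attenuate whatever exceeds the threshold."""
        level_db = 20 * np.log10(np.maximum(np.abs(signal), 1e-12))
        over_db = np.maximum(level_db - threshold_db, 0.0)
        reduction_db = over_db * (1.0 - 1.0 / ratio)
        return signal * 10 ** (-reduction_db / 20)

    x = np.array([0.9, 0.5, 0.1])                 # a few arbitrary test samples

    # Purely linear path: -6 dB then -3 dB is identical to -3 dB then -6 dB.
    print(gain(gain(x, -6), -3))
    print(gain(gain(x, -3), -6))

    # With a level-dependent effect between the two faders, order matters:
    print(gain(crude_compressor(gain(x, -6)), -3))
    print(gain(crude_compressor(gain(x, -3)), -6))
    [/code]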


    Ben@VSL | IT & Product Specialist
  • Thanks for the info.

    Is there a chance you will add an increased output level from the main volume? I routinely have to add trim plug-ins after the Synchron player to match output levels from most of my other VIs like Kontakt, u-he Zebra, Spitfire, etc. In Kontakt, for example, you have the option to increase the main output level by as much as 36 dB!


  • My reply to this thread has already been posted; it can be found here: [url=https://www.vsl.co.at/community/posts/t57921findunread--Deleted#post305284]https://www.vsl.co.at/community/posts/t57921findunread--Deleted#post305284[/url]

    Macker didn't like my response for some reason and moved his original post to a new thread. He did that twice, actually; I'm not sure why.

    Anyway, Ben, I would love to hear more specifics about what exactly CC11 does above and beyond attenuation, if anything, and also about how and why the ordering of CC7 and CC11 matters, particularly whether there is any processing that happens between them, or any other factor. Thanks for contributing!

    [i]Edit: I just realized that you already explained that CC11 applies attenuation pre Synchron mixer, while CC7 is post mixer. Makes perfect sense. The mixer itself is also a gain stage that could be used for level balancing, @garylionelli, but it doesn't seem to have a master fader, unfortunately, so I probably wouldn't mess with that myself.[/i]

  • last edited

    @garylionelli said:

    Thanks for the info. Is there a chance you will add an increased output level from the main volume? I routinely have to add trim plug-ins after the Synchron player to match output levels from most of my other VIs like Kontakt, u-he Zebra, Spitfire, etc. In Kontakt, for example, you have the option to increase the main output level by as much as 36 dB!
    I second this, and actually I would prefer that CC7 not be attached to it in any way. In many DAWs it's too easy to nudge a MIDI track fader and destroy a level-balanced orchestra when CC7 is being used for the balancing, as opposed to a disconnected volume level that is saved with the preset, is not controlled by CC7, and can push gain above 0 dB. Or maybe a master fader in the Synchron mixer could provide this!

  • Ben, thanks for your information, I didn't know that. 

    I never use the effects available in Synchron Player. I prefer to use my fairly extensive collection of VST/AU Fx plugins, which I've carefully curated over many years and know and understand well and can use with ease - and of course they can't be plugged into Synchron Player. I can't imagine I'm unique in that way among Synchron Player users. Even so, that's no excuse for me not knowing about the signal flow you refer to.

    But yes, that can indeed be an exception to the interchangeability of the two faders, but only in the case of using level-dependent VSL Fx in Synchron Player's Mixer, such as compressors, noise gates, saturators, limiters, etc. For all other, non-level-dependent VSL Fx, I hope you'll concede that interchangeability still holds true.

    I'm grateful for that knowledge on Synchron Player signal flow; many thanks, Ben.

    I hope to see a simplified signal-flow diagram of the Synchron Player in the official documentation one day!


  • BTW, I've seen some extraordinarily weird, wrong and just plain stupid stuff being asserted or assumed in connection with using Vol and Expr faders. For example - the logarithmic scaling of these faders is irrelevant - hahaha! Good luck with that but don't expect me to respond to anyone who has tried to purvey such tosh 'authoritatively'. (That example belongs in the same cesspool as the drivel dished out - just as 'authoritatively' - by those notorious Asian "IT support" scammers. Would you give them even the time of day? lol.)

    Incidentally, I'd love to see schools and universities teaching courses on the topic of ignorance. I think Socrates would approve heartily! We're all morons, if you get right down to it, but the essential trick is knowing that, and dealing with it carefully in everyday life! Lolol


  • last edited

    @Macker said:

    Ben, thanks for your information, I didn't know that. 

    I never use the effects available in Synchron Player.

    If you use SYNCHRON-ized instruments it's definitely relevant as well, because changing Volume will also influence the reverb tail (if you use the included IR), while changing Expression changes the volume before the IR engine stage.
    For this reason I highly recommend using Expression and avoiding Volume automation (of course, you can do that if necessary, but with Expression you won't have to think about cut-off or boosted reverb tails at all).
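
    A minimal sketch of the effect (toy numbers of my own, not the Player's actual engine): a gain applied before a convolution reverb only shapes what feeds the reverb, while a gain applied after the reverb also chops off a tail that is already ringing.

    [code]
    import numpy as np

    sr = 100                                    # toy "sample rate"
    ir = np.exp(-np.arange(sr) / 20.0)          # crude decaying impulse response (the reverb)

    note = np.zeros(2 * sr)
    note[0] = 1.0                               # one short hit at the start

    fade = np.ones(2 * sr)
    fade[sr // 2:] = 0.0                        # the fader is pulled to zero halfway through

    pre_ir = np.convolve(note * fade, ir)[:2 * sr]    # "Expression": gain before the IR stage
    post_ir = np.convolve(note, ir)[:2 * sr] * fade   # "Volume": gain after the IR stage

    print("tail energy after the fade, gain before reverb:", pre_ir[sr // 2:].sum())
    print("tail energy after the fade, gain after reverb :", post_ir[sr // 2:].sum())
    # The pre-IR signal keeps ringing out; the post-IR signal is cut dead.
    [/code]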


    Ben@VSL | IT & Product Specialist
  • Ok yes; thanks, Ben. Understood. I'd guessed that would also be the case. It so happens that I don't use any IR or algo verb at all inside Synchron Player. But of course I do see that signal flow must be considered for those cases, taking into account the different effect each fader could have on reverb tails when automated dynamically.

    I've now edited my "Conclusion" post above to take into account the vital and helpful information you've provided on signal flow through the Synchron Player. Thanks again for your help, Ben.


  • GaryLionelli, I second your comments about level.

    I too have often felt somewhat starved of level from Synchron Player, and like you, sometimes add a gain trimmer after the player. I recall being surprised and disappointed the first time I looked for a gain trimmer in Synchron Player's internal Fx devices and discovered there isn't one. I've no idea why there isn't.

    I propose that VSL add a simple gain device in the Utilities section of Synchron Player's internal Fx.


  • The Power Pan plugin under Utilities has a gain knob with up to +12 dB.


    Ben@VSL | IT & Product Specialist
  • Ah ha! Thank you Ben.


  • last edited

    @Dewdman42 said:

    I would prefer that cc7 is not in any way attached to it.

    You can assign any other CC to the Master Volume fader, or no CC at all, if you prefer it to be entirely disconnected and protected.

    Paolo


  • last edited

    @Ben said:

    The Power Pan plugin under Utilities has a gain knob with up to +12 dB.

    Is there a master bus in the mixer that I am missing?

  • last edited

    @PaoloT said:

    You can assign any other CC to the Master Volume fader, or no CC at all, if you prefer it to be entirely disconnected and protected.

    Paolo

    Excellent idea, thank you!

  • last edited

    Regarding the response curve of the CC7 and CC11 sliders, I believe they may respond a little differently than most people are used to with typical DAW audio faders and automation lanes, mainly because audio automation lanes use dB as their vertical Y axis. It's not entirely clear what attenuation calculation Synchron actually applies to CC values 1-127. I still think it is most likely a simple amplitude percentage that Synchron itself applies, but VSL will have to clarify for us to know for sure.

    Typical DAW mixer-fader automation lanes use dBFS as their Y axis, which is not a linear scale. If you draw a straight ramp on such a dB-based lane, the underlying amplitude does not change linearly: the line on the lane is straight, but it is expressed in dB, a non-linear scale imposed by the DAW.

    MIDI CC automation lanes, on the other hand, do not use dB for the Y axis, so the DAW itself adds no curve there. How a linear ramp drawn in a MIDI CC lane translates into actual gain reduction - i.e. the response curve - depends on what Synchron does internally for its attenuation calculation when given a linear CC ramp. Macker has hinted at this in another thread.

    My own testing with an external meter has shown results very similar to those Macker posted in another thread.

    There is a standard calculation for converting amplitude to a dBFS value, which is:

    @Another User said:

    dBFS = 20 * log10 (amplitude percent)

    This calculation takes a totally linear amplitude ramp and produces a curved dBFS line, since dB is itself a logarithmic scale. The calculated curve is very similar to the results Macker posted in one of the other threads, and it matches my own measurements (made with an external meter), although it is not exactly the same. It curves in the same direction, but it's hard to compare the theoretical calculation above with the actual measured results, since the reference level is an important part of any dB calculation.

    The theoretical curve, based on a linear ramp of amplitude percentage calculated as dBFS with 0 dB as the reference level, looks like the attached JPG. That is roughly how I have found the CC7/CC11 faders to respond when measuring the results with an external meter, and the mixer volume faders show a similar curve, although the exact shape differs between the real-world and theoretical curves, most likely because the real-world measurement is based on a non-zero reference level.
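
    To make that concrete, here is a minimal sketch of the theoretical curve, assuming (and it is only an assumption until VSL confirms it) that Synchron maps a CC value linearly to an amplitude fraction of cc/127.

    [code]
    import math

    def cc_to_dbfs(cc):
        """Convert a MIDI CC value (1-127) to dBFS, assuming amplitude = cc / 127."""
        amplitude = cc / 127.0
        return 20 * math.log10(amplitude)

    for cc in (127, 96, 64, 42, 16, 1):
        print(f"CC {cc:3d} -> {cc_to_dbfs(cc):7.1f} dBFS")

    # CC 127 -> 0.0 dBFS, CC 64 -> about -6 dB, CC 42 -> about -9.6 dB,
    # but CC 16 is already down around -18 dB and CC 1 sits near -42 dB,
    # which is why the bottom third of the slider range feels mostly unusable.
    [/code]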

    I have found that the bottom third of the CC sliders' range is basically useless, because it attenuates the signal so much and so quickly, at already very reduced levels, as to be mostly meaningless. So the true useful range of these sliders is really roughly 50-127.

    I personally think most people are not really paying attention to the dB readings while riding their CC11 control for expression; they are using their ears. So I'm not sure how dire the need really is to know the actual dB response curve. Most likely it is a simple percentage applied to the MIDI CC scale, which is linear in nature, and that shows up in dBFS measurements as a logarithmic curve. Because of the shape of this curve, the bottom 50 MIDI increments are probably not useful most of the time.

    Regarding VelXF, just playing with it for a few minutes shows that it uses more intelligence than the CC7/CC11 response curves, mimicking a response better fitted to each instrument, with the full range of 128 MIDI CC values mapped onto the realistic dynamic window each particular instrument is capable of. In my mind, this is the most intuitive control to use for dynamic expression of Synchron instruments: it has the greatest usable CC range, and it affects the timbre in realistic ways.

    I feel CC7 should at most be used for setting orchestra level: set it and forget it. CC11 can be used for fine adjustments after VelXF has done the main work. Even better, in my mind, is to leave CC11 alone and use mixer automation for these kinds of fine adjustments, since it has more resolution anyway.


  • Paolo, I hope you find this conclusion to be correct, even if it perhaps doesn't exactly resolve your original issue that sparked off the whole debate. I'd be interested to know what you think of it.


  • I think it's already safe to deem the Conclusion, as defined at the top of this thread, valid. So I'm now treating this thread as closed. Job done.

    Any further explorations or elaborations of the theoretical and practical bases from which this conclusion was drawn are outside the scope of this thread; they belong in the earlier threads that address those topics, or in new threads.

    In the event that a paradigm shift (à la Thomas Kuhn) pops up in audio engineering theory (such as, say, logarithmic-law volume controls suddenly becoming irrelevant and inappropriate - well hey, the Weber-Fechner laws in psychophysics, on which all volume faders are based, are 161 years old so surely they can be canceled, right? lololol), or if (no, seriously) an actually relevant and significant conceptual error is revealed during the course of further discussion in those relevant threads, I will then of course update the conclusion here accordingly.

    (I'm thinking of future readers who want quick, direct and reliable facts and answers without having to trawl through pages and pages of discussions in several threads on the topic. Let's be considerate. This isn't Twitter. I don't want to have to move this thread yet again just to clean off reams of unwelcome and totally irrelevant 'doggy-decoration'.)

    My special thanks to Ben at VSL for revealing to me a key aspect of Synchron Player's internal signal flow.

    Now I'm off to catch up on my projects.

    Ta-tas.


  • Sorry, I have to apologize: I have talked to our devs regarding this topic, and it seems I remembered this one wrong:

    Expression and Volume control the output of the channels; the two values simply get multiplied (on a scale of 0.0-1.0).

    Best, Ben


    Ben@VSL | IT & Product Specialist
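
    For what it's worth, here is a minimal sketch of what "simply multiplied (on a scale of 0.0-1.0)" would look like, assuming both controls act as plain normalized gains (a reading of Ben's description, not confirmed VSL code):

    [code]
    def channel_output(sample, expression, volume):
        """Apply Expression and Volume as two multiplicative gains in the range 0.0-1.0."""
        return sample * expression * volume

    # The multiplication itself is order-independent:
    print(channel_output(0.8, expression=0.5, volume=0.25))   # 0.1
    print(channel_output(0.8, expression=0.25, volume=0.5))   # 0.1
    [/code]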