Vienna Symphonic Library Forum

  • last edited

    @Another User said:

    Getting VIs and DAWs to agree on a system for directions too, when so many don't have any representation of the concept, seems like a dream never to be realized. However, if VIs could let you control what gets expressed as sound variations (articulations), you could limit that to proper per-note articulations. The number of other directions (controlled via e.g. CCs or automation) is usually small - bowing style, vibrato, muting etc, and most DAWs have reasonable ways to set and chase CCs.

    I think you might be conflating several things here.  Cubase expression map grouping works regardless of whether you are using ATTRIBUTE or DIRECTION.

    DIRECTION and ATTRIBUTE both have their separate pros and cons, but grouping is not reserved only for DIRECTION.


  • last edited

    @Dewdman42 said:

    I think you might be conflating several things here.  Cubase expression map grouping works regardless of whether you are using ATTRIBUTE or DIRECTION.

    DIRECTION and ATTRIBUTE both have their separate pros and cons, but grouping is not reserved only for DIRECTION.

    I do understand Cubase expression maps well, and did not say all groups have to be directions. I'm saying you do not get out of combinatorial explosion of articulation entries until you have an independent notion of directions. Consider: you have staccato, spiccato, detache and long, 4 artics. Then you add a mute group with 2 entries normal and con sordino. You now have 8 (4x2) entries in a Cubase-style map. You add a vibrato group with norm, senza and molto. Now you have 24 entries (4x2x3). Direction or not, there's a multiplication going on. 

    Now consider having proper notation-style independent directions. You still have 4 artics, but could add 2 independent mute direction states and 3 independent vibrato direction states, for a total of 9 things you have to say (4+2+3).
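
    A quick sketch of that arithmetic (Python, with the hypothetical groups from this example), just to make the counting concrete:

        from itertools import product

        artics = ["stac", "spic", "det", "long"]
        mutes  = ["normal", "con sordino"]
        vibs   = ["senza", "norm", "molto"]

        # Cubase-style map: one sound slot per combination of every group
        combined_entries = len(list(product(artics, mutes, vibs)))   # 4 * 2 * 3 = 24

        # Independent directions: one definition per value, per group
        independent_entries = len(artics) + len(mutes) + len(vibs)   # 4 + 2 + 3 = 9

        print(combined_entries, independent_entries)                 # 24 9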

    Dorico had the same problem as Cubase, whose expression maps they copied at first. I documented the problems here:

    Subsequently they added Add-on Techniques in Dorico 3.5, moving away from the Cubase model:

    "and improved handling for techniques that can be combined with other sounds, without needing to define every possible combination in the expression map"

    What Cubase groups are missing is an association between a group and the output messages that express it. All output messages for an entry are just an undifferentiated set. This is unlike Synchron Player, where dimensions not only group a characteristic together, but also independently specify how to select the choices within the group, without regard to combinations.

    tl;dr - Cubase groups do not solve the combination problem. When I described that to the Dorico devs they improved Dorico by making some groups/dimensions independent. Synchron Player similarly needs to let us specify which dimensions are for per-note articulations and which we want to control independently of the sound variations system.


  • last edited

    @Another User said:

     Synchron Player similarly needs to let us specify which dimensions are for per-note articulations and which we want to control independently of the sound variations system.

    I don't understand this suggestion either; maybe you can try writing it another way - I am genuinely interested.


  • I'll also say for the record that I am disappointed with the direction things are going with PreSonus coming out with Sound Variations, a feature which I think is overly simplistic and short-sighted about certain important things that need to be accounted for - you are bringing up one of them, DIRECTION vs ATTRIBUTE, for example. But there is also no concept of groups, no concept of per-articulation latency offsets, etc. PreSonus came out with what I consider to be an inadequate API for this kind of thing and asked everyone to standardize on it.

    VSL got excited about it and did so, which got a lot of good press and excitement, but in the long run I still feel it's short-sighted to have jumped on this PreSonus API in its current form.

    Now MOTU has also jumped on this bandwagon.

    I'm afraid we are going to be stuck with a lowest-common-denominator articulation management system for years to come, with S1, DP and any instruments using that API.

    (off soapbox)


  • last edited

    We're definitely crossing our signals here. What you have been talking about is the UI within the Cubase MIDI editors for indicating which articulation is to be used, where you can orthogonally talk about the directions vs the per note attributes.

    I am talking about the expression maps themselves, because that is what gets transferred from VI to DAW via sound variations.

    So let's take a single case of something in Synchron Player and how to talk about it in Cubase vs Dorico. Dorico in their 3.5+ maps offers more of what I am talking about.

    Let's take a theoretical SP instrument with the artics and directions I described. In all cases there are 24 patches; I am talking about the definition of the control plane.

    In SP it looks like this:

    Artic (C-0)   Vib (CC1)   Mute (CC4)    - note how the control per dimension is declared
    ============================================
    stac (C0)     Senza       Norm
    spic (C#0)    Vib         Con sord
    det (D0)      Molto
    long (D#0)

    9 entries. Note that how e.g. Vib is controlled is explicit (CC1). To change vib you only need to send CC1, and it 'sticks' until you send something else. This is an idea independent of any DAW/notation program and its editing UI - it's the control plane for the VI.

    In Dorico it looks like this:

    Artic
    ===========
    stac (C0)
    spic (C#0)
    det (D0)
    long (D#0)

    Vib (add-on)
    ============
    Senza (CC1 - 0)
    Vib (CC1 - 64)
    Molto (CC1 - 127)

    Mute (add-on)
    =============
    Norm (CC4 - 0)
    Con sord (CC4 - 127)

    Again, 9 entries. The add-ons correspond directly to the dimensions. Dorico knows that if Vib changes in the score it needs to send CC1. Whether or not it sends this with every note is its problem, not yours.
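
    One way to picture what both the SP dimensions and the Dorico add-ons above give you (a hypothetical Python sketch, not any real VSL or Dorico API): each group declares its own controller, so the per-note articulation is a keyswitch and each direction change is exactly one message.

        # Hypothetical model of independent direction groups (not a real API).
        ADD_ONS = {
            "Vib":  {"cc": 1, "values": {"Senza": 0, "Vib": 64, "Molto": 127}},
            "Mute": {"cc": 4, "values": {"Norm": 0, "Con sord": 127}},
        }
        ARTICS = {"stac": "C0", "spic": "C#0", "det": "D0", "long": "D#0"}  # keyswitches

        def messages_for(artic, **directions):
            """Keyswitch for the per-note articulation plus one CC per direction group."""
            msgs = [("keyswitch", ARTICS[artic])]
            for group, choice in directions.items():
                g = ADD_ONS[group]
                msgs.append(("CC%d" % g["cc"], g["values"][choice]))
            return msgs

        # 4 + 3 + 2 = 9 definitions cover all 24 sounding combinations:
        print(messages_for("spic", Vib="Molto", Mute="Con sord"))
        # [('keyswitch', 'C#0'), ('CC1', 127), ('CC4', 127)]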

    In Cubase it looks like this:

    Artic   Vib      Mute       - Messages
    ====================================================
    stac    Senza    Norm       C0  + (CC1 - 0)   + (CC4 - 0)
    stac    Senza    Con sord   C0  + (CC1 - 0)   + (CC4 - 127)
    stac    Vib      Norm       C0  + (CC1 - 64)  + (CC4 - 0)
    :
    20 other entries...
    :
    long    Molto    Con sord   D#0 + (CC1 - 127) + (CC4 - 127)

    24 entries. Tons of redundancy. Worst of all, Vib (for example) is not an independent, first-class idea - there is no indication of which messages represent Vib. Vib is just something that can participate in a combination, and only then, to generate a ball-of-mud set of messages. But Vib was an independent idea in SP, so this is lossy. Dorico add-ons are strictly better.
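
    For contrast, a self-contained sketch (hypothetical Python, not the actual Cubase file format) of what a flattened, combination-only map amounts to: every sound slot is a full combination whose output is just a flat list of messages, with no record of which message expresses which group.

        from itertools import product

        ARTICS = {"stac": "C0", "spic": "C#0", "det": "D0", "long": "D#0"}
        VIB    = {"Senza": 0, "Vib": 64, "Molto": 127}    # sent on CC1
        MUTE   = {"Norm": 0, "Con sord": 127}             # sent on CC4

        # One slot per combination; the value is an undifferentiated message list.
        flat_map = {
            (a, v, m): [("keyswitch", ARTICS[a]), ("CC1", VIB[v]), ("CC4", MUTE[m])]
            for a, v, m in product(ARTICS, VIB, MUTE)
        }

        print(len(flat_map))   # 24 entries
        # Nothing in a slot's message list says which message expresses Vib and
        # which expresses Mute - that association was explicit in the 9-entry model.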

    Cubase's maps are not good because they are purely combinatorial. The Cubase MIDI editing UI (I'm not talking about the expression map editor) could just as easily be used to control maps with first-class directions, because that is exactly what Dorico does via notation and add-ons.

    @Another User said:

    Synchron Player similarly needs to let us specify which dimensions are for per-note articulations and which we want to control independently of the sound variations system.

    I don't understand this suggestion either; maybe you can try writing it another way - I am genuinely interested.

    What I'm advocating for SP is that it let me tell it to only use, in this case, the Artic dimension for sound variation sync, meaning it will send a map with 4 entries, because I am going to use CCs or automation lanes to control Vib and Muting.

    There is no need to have a shared "group" idea in the sound variations API because that will just lead to more copying of Cubase in its current limited state (which Dorico did and then backed away from), and dictates too much to implementations. Sound variations are ok for per-note stuff.

    In practice, for most Synchron libs I'd only want the Articulation and Type dimensions to generate a sound variation map. I just did this by hand for Elite Strings and the map has 42 entries (instead of the 400+ it generates automatically) - perfectly great for DP's artic selection menus and editors. I control Vib, Attack, Release etc. with CCs - they chase, and fewer messages are sent per note. In Logic it's trivial to make a Scripter UI which gives CC values names.
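
    The kind of switch I'm asking for could be as simple as a per-dimension "include in sound variations" flag (a hypothetical sketch in Python, using the theoretical 4/3/2 instrument from earlier, not Synchron Player's actual data model):

        from itertools import product

        # Only dimensions flagged for export contribute to the sound variation map;
        # the rest stay on CCs/velocity/speed and are never multiplied in.
        dimensions = {
            "Articulation": {"export": True,  "values": ["stac", "spic", "det", "long"]},
            "Vibrato":      {"export": False, "values": ["senza", "norm", "molto"]},
            "Mute":         {"export": False, "values": ["norm", "con sord"]},
        }

        exported = [d["values"] for d in dimensions.values() if d["export"]]
        sound_variations = list(product(*exported))

        print(len(sound_variations))  # 4 entries instead of 24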

    Still a WIP, but I'm happy to share it if you have Elite Strings.


  • last edited

    @richhickey said:

    We're definitely crossing our signals here.

    Apparently still... Sorry, but you are just incorrect about Cubase. But I'm tired of this discussion. Try the things I suggested earlier if you like, and my emz program may help you set up Cubase expression maps without combinatorial explosion.

    But both DP and S1 will suffer from the combinatorial explosion when using Sound Variations.


  • last edited

    @richhickey said:

    We're definitely crossing our signals here.

    Apparently still... Sorry, but you are just incorrect about Cubase. But I'm tired of this discussion. Try the things I suggested earlier if you like, and my emz program may help you set up Cubase expression maps without combinatorial explosion.

    But both DP and S1 will suffer from the combinatorial explosion when using Sound Variations.

    Sorry to hear that. I enjoy talking with you about these issues since we seem to share many of them. I'm a programmer and can and have written programs like your emz, but I think the need for it highlights the shortcomings of Cubase. When you say "the editor is lame and requires you to retype a lot of sound slot rows to account for all possible combinations in the actual sound slots" you are again highlighting the problem with the expression map architecture itself. Even if you use a program to help you generate it, the map is still one of combination->bunch-of-messages and has no sense of which messages correspond to which values in a particular dimension/group.

    But the important thing here is not a "what if?" about Cubase or the Sound Variations API. The Sound Variations API exists, it works the way it does, and it raises the question as to how Synchron Player will interact with it.

    I maintain the initial premise that Synchron Player, by naively exposing the multiplicity of all dimensions to the SV API, is creating a combinatorial explosion that does not exist in Synchron Player itself.

    At least it seems obviously wrong to send combinations with dimensions assigned to Speed or None, since there are no messages in the SV API that can control them.

    To be perfectly clear what I am talking about, here is the articulation picker menu generated by default from Synchron Player for Elite Strings. The image is just the first of 5 pages of menu!

    Now here is the entire menu generated from just the Articulation and Type dimensions. Since I use CCs/Velocity/Speed for things like vib/attack/release/agile I don't want them in the sound variation system. That control is what I am asking for.


  • The Cubase problem is only a problem because of its editor, which makes it laborious to manually edit all the sound slot rows; having a hundred sound slot rows in the expression map editor is NOT A PROBLEM if and when it is using groups that reduce it to only a dozen rows in the actual piano roll lane. Despite this laborious workflow of editing all those sound slots, there are some advantages to having every combinatorial sound slot listed out in the editor: you can edit every explicit combination to have whatever keyswitches you want. That's how I am able, for example, to have a non-mute default encoded into the sound slots without using up an additional row in the piano roll. I have heard of some other creative uses as well. So it's not entirely a bad concept, but it's rather unwieldy the way they have designed the actual expression map editor, forcing you to redundantly enter the same keyswitches over and over again while creating all the combinatorial sound slots.

    Listen, I mainly use DP, not Cubase. I only mentioned Cubase because I feel that the other DAWs need a grouping feature in order to avoid displaying a bazillion articulation rows, as you are complaining about. I feel that is an important articulation management function that could have been implemented better in Cubase's editor, that should in my opinion exist in some form in all DAW articulation management systems, and that should likewise be expanded into the Sound Variations API in some fashion. That is how the many situations of combinatorial explosion can be presented to the user in a much saner way. Either that, or sound variations should always send the explosion and leave it to the user to group all those internal sound slots into a smaller "grouped" list of articulations however the DAW is able to, which unfortunately DP and S1 currently can't do anyway.

    Aside from that, I hear what you are saying about wanting the ability to control from Synchron which dimensions or dimension rows should be included in the sound variation export, as a way to reduce the combinatorial explosion - hide some of them, basically. For now it's probably easier to just export it once, get the explosion, then go to DP and delete all the combinations you don't want, and then edit the articulation map in DP after that to avoid having to do it again.

    Honestly, I feel the Sound Variations concept is not fleshed out very well by PreSonus. It's a nice overall concept, but there is a lot more complexity than I think they have realized, and it opens a big can of worms for more complicated scenarios. It works fine for simple scenarios, but it's not really saving much time for those scenarios either; it's just adding complexity to a system to avoid a user having to manually create an articulation map with a dozen rows that would take all of 10 minutes to build by hand the first time.

  • last edited

    If anyone wants to try this, and has both DP11 and Synchron Elite Strings, this folder contains slightly re-arranged presets for Synchron Elite Strings (mostly normalizing the treatment of Vibrato, which in the defaults was sometimes a Type but more often its own dimension), as well as an articulation map file for Digital Performer 11 and up. The same articulation map works for all the sections, enabling copy/paste between tracks while keeping articulations intact.

    This map has entries only for the combinations of Articulation and Type, both of which are driven by keyswitches at the low end of the range. The rest of the dimensions (vib/attack/release etc.) are controlled by CCs, velocity or speed.

    This yields a manageable list of 42 entries, which works great in the articulation picker menus and UI. By default velocity controls attack type, CC1 controls vibrato type, CC3 controls release type, CC20 controls slot XF and CC2 controls velocity XF. You can easily change which controllers are used in the Control tab of Synchron Player.


  • last edited

    Is there an option to add sounds from outside sources to a melody in this application? I dream about recording a track that has been spinning in my head for a long time, but I don't quite understand how it all comes together. The only thing I have already done is to find the commercial use sound effects library, from which I will take some sounds to fill out my melody. But I have no idea how to build the melody itself. Please tell me some suitable applications for beginners; I am terrified that I will forget this sound in my head and never have the opportunity to realize it.