New samples and platforms are exciting, but anything that relieves deep, detailed, endless keyswitching would be genuinely ground-breaking.
this sounds nice, but how could an algorithm "know" whether you want to play a staccato or a short détaché (not to mention fancier things like spiccato, harsh attacks, …)? The information about how the attack phase should be played and how long the note should sustain is simply not available at the moment a mere note-on message is received, which is exactly when the algorithm would have to pick an articulation. The algorithm could only use the previously played notes to "guess a likely articulation", but that would strongly restrict our freedom and is definitely not something I would want (or it should at least be possible to disable such a feature). With a given algorithm one might be able to play Haydn or Stravinsky, but hardly both 😉.
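To make the timing problem concrete, here is a minimal, purely hypothetical Python sketch (no real engine's API is implied, all names are made up): at note-on, the current note's duration is unknown, so any guess can only lean on *previous* notes and therefore always lags one note behind the player's intent.

```python
def guess_articulation(history):
    """Guess an articulation at note-on time.

    `history` holds durations of PREVIOUS notes only -- the current
    note's duration is not yet known when its note-on arrives.
    """
    if not history:
        return "sustain"            # nothing to go on yet
    prev_duration = history[-1]     # seconds, known only after note-off
    # Crude illustrative heuristic: short previous note -> staccato.
    return "staccato" if prev_duration < 0.15 else "sustain"

# Durations the player actually produces (known only after each note-off):
durations = [0.5, 0.1, 0.1, 0.6]
history, guesses = [], []
for d in durations:
    guesses.append(guess_articulation(history))  # decided at note-on
    history.append(d)                            # learned at note-off

print(guesses)  # ['sustain', 'sustain', 'staccato', 'staccato']
```

Note how the second note (actually short) still gets "sustain" and the last note (actually long) still gets "staccato": the guess trails the performance by one note, which is exactly why such a heuristic constrains what you can play.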
They mention continuous controllers, which could surely be very useful if implemented properly (in particular if they "morphed" between different articulations), but this is just another form of manual articulation selection ... and it would be nice to know what exactly they do.
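One plausible reading of "morphing" is a continuous crossfade between two articulation layers driven by a CC value. A small sketch of that idea, purely my assumption of how it might work (the function name and the equal-power curve are illustrative, not anything the developers have confirmed):

```python
import math

def morph_gains(cc_value):
    """Map a MIDI CC value (0..127) to (legato_gain, staccato_gain).

    Uses an equal-power crossfade so perceived loudness stays roughly
    constant while sweeping the controller between the two layers.
    """
    x = max(0, min(127, cc_value)) / 127.0
    return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)

g_leg, g_stac = morph_gains(0)
print(round(g_leg, 3), round(g_stac, 3))   # 1.0 0.0 -> pure legato layer

g_leg, g_stac = morph_gains(127)
print(round(g_leg, 3), round(g_stac, 3))   # 0.0 1.0 -> pure staccato layer
```

Even if it works like this, the player still has to ride the controller by hand, so it remains manual articulation selection, just a continuous one instead of discrete keyswitches.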