@jc5 said:
I am curious to see just how 'intelligent' an 'intelligent' instrument can be made at this point in time ... while I would value this as much as everyone else, I'm not sure that I'm willing to give up control of my articulation choices unless the system really works beyond what I've been imagining...
I don't know if VSL is doing this, but the need for instruments or tools (whatever you want to call them) to have some awareness of musical knowledge structures has been pressing for some time now. When they finally come, such tools will seem like a great advance to us, unless we remember that software tools dealing with text have had capabilities such as word search-and-replace for over 20 years, while we still don't have such a thing for, say, a musical motif across all tracks. It's sad that we have to get 'excited' about music-software development moving from the iron age to the bronze age.
I doubt very seriously that much will change in the performance tool area with whatever is coming on Nov. 26. I'll be very, very happy to be wrong, of course. It's just that I think commercial demands for tools to work equally well in real time as they do in playback are hampering development in this field. Even Eric, the Synful developer, has run into this mentality. If developers could ignore the absurd requirement that their tools must work during real-time keyboard playing, they could make huge advances and contributions. For example, if the tool were run "post playing," it would be trivial to analyze the data, determine exact note lengths, detect whether a crescendo or diminuendo is happening, and so on, and then swap in the most appropriate samples for those musical situations.
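To make the point concrete, here is a minimal sketch of that "post playing" idea: once the whole performance is recorded, note lengths are known exactly and a dynamic trend can be read straight off the velocities. The `Note` record, the thresholds, and the articulation rule are all illustrative assumptions of mine, not anything VSL or Synful actually does.

```python
# Hypothetical post-performance analysis of a recorded MIDI phrase.
from dataclasses import dataclass
from typing import List

@dataclass
class Note:
    onset: float      # seconds from start of performance
    duration: float   # seconds (exactly known, since playing is over)
    pitch: int        # MIDI note number
    velocity: int     # 0-127

def dynamic_trend(notes: List[Note], threshold: float = 2.0) -> str:
    """Classify a phrase as crescendo, diminuendo, or steady by fitting a
    least-squares line to velocity over time and inspecting the slope."""
    if len(notes) < 2:
        return "steady"
    n = len(notes)
    xs = [nt.onset for nt in notes]
    ys = [nt.velocity for nt in notes]
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom if denom else 0.0
    if slope > threshold:
        return "crescendo"
    if slope < -threshold:
        return "diminuendo"
    return "steady"

def pick_articulation(note: Note) -> str:
    """Trivial sample choice from the now-known note length (toy rule)."""
    return "sustain" if note.duration > 0.5 else "staccato"
```

The point is only that offline, the hard real-time guessing disappears: durations and dynamic shape are plain facts in the data.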
Going outside of VSL's focus ... imagine music processing tools that get away from the "signal processing" mindset and focus instead on musical structures. Then imagine being able to select all instances of a theme in an entire MIDI file, alter the second note of the theme, transpose the theme, assign it to a specific instrument, modify its loudness ... or whatever. Would be cool, no?
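A motif search-and-replace of this kind is not exotic computationally; here is a sketch under simplifying assumptions (a track is just a list of MIDI pitch numbers, and a theme is matched by its interval pattern so that transposed statements are found too). The function names and data model are mine, not any real sequencer's API.

```python
# Hypothetical "grep for motifs": find every (possibly transposed) statement
# of a theme in a pitch sequence, then apply an edit to each occurrence.
from typing import List, Tuple

def intervals(pitches: List[int]) -> Tuple[int, ...]:
    """Successive semitone steps -- a transposition-invariant signature."""
    return tuple(b - a for a, b in zip(pitches, pitches[1:]))

def find_motif(track: List[int], motif: List[int]) -> List[int]:
    """Return start indices of every motif statement, at any transposition."""
    pat, k = intervals(motif), len(motif)
    return [i for i in range(len(track) - k + 1)
            if intervals(track[i:i + k]) == pat]

def transpose_motif(track: List[int], motif: List[int], semitones: int) -> List[int]:
    """Transpose every matched statement (overlapping matches would stack)."""
    out = track[:]
    for start in find_motif(track, motif):
        for j in range(len(motif)):
            out[start + j] += semitones
    return out
```

The same match-then-edit skeleton would carry any of the other operations mentioned above: reassigning the matched notes to another instrument, scaling their loudness, or altering one note of the theme.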
Imagine a tool that knows harmonically where the expansion of a phrase reaches its highest point and can then adjust the loudness layers of the sample instruments (to use EXS terminology) to build energy toward that point and resolve it away afterward. (Or you could have it render the inverse of that, increasing energy toward the cadence. You could do whatever you like, once the tool can actually deal with the data in musical terms.) You could click "render phrasing," with a slider giving you the scale at which to do so. And the underlying "best possible sample choice" engine from VSL would select and employ the best sounds to fit the musical situation. Of course you'll be able to tweak anything "manually" as well, just as you can now.
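As a toy version of that "render phrasing" button: the sketch below stands in the phrase's highest pitch for real harmonic analysis (a loud, labeled simplification) and ramps velocities up to that peak and back down. The `lo`/`hi` range plays the role of the loudness-layer scale slider; all names and the shaping rule are assumptions for illustration.

```python
# Hypothetical "render phrasing" pass: build a dynamic arch that peaks at
# the phrase's high point and resolves away from it.
from typing import List, Tuple

def render_phrasing(pitches: List[int],
                    lo: int = 60, hi: int = 110) -> List[Tuple[int, int]]:
    """Return (pitch, velocity) pairs whose velocities rise linearly to the
    highest pitch and fall linearly after it. `lo`/`hi` bound the dynamics,
    standing in for a loudness-layer scale control."""
    peak = max(range(len(pitches)), key=lambda i: pitches[i])
    shaped = []
    for i, p in enumerate(pitches):
        if i <= peak:
            t = i / peak if peak else 1.0                            # rise
        else:
            t = (len(pitches) - 1 - i) / (len(pitches) - 1 - peak)   # fall
        shaped.append((p, round(lo + t * (hi - lo))))
    return shaped
```

Swapping the peak-finding line for genuine harmonic analysis, and the velocity write for loudness-layer selection in the sample engine, is exactly where the "musical knowledge structures" mentioned earlier would have to live.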
But my point is that, when you compare tools such as Photoshop and AutoCAD (not to mention plain old grep) to what we have to work with now, it's clear that music performance software falls far short of where it needs to be.
Somehow, though, I think the collective desires expressed already in this thread far surpass what the VSL will be able to deliver this month. Such is the danger of "hyping" a crowd of highly imaginative and passionate people! Nevertheless, if the VSL can pull another rabbit out of a hat and address even some of these tired old issues, I'll certainly step up to the plate and buy it!
- Paul