"The whole point of an "intelligent" front-end is that it no longer needs to load a complete 'sample map' every time a new articulation is required. Rather, it is only necessary to load the sample for the desired articulation at the desired pitch."
I understand. I presently can't conceive of intelligence so powerful that it can interpret MIDI live, access any one of the numberless VSL articulations, *load* it, and then play it back without latency, even if it is only one note and not the entire map. I'd think at least the head of the sample would still need to be in RAM.
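For what it's worth, here's a crude sketch of what I mean by keeping the head in RAM. It assumes a plain disk-streaming sampler of my own invention (the names, the buffer size, and the use of Python's wave module are all mine, not anything VSL has shown): the attack comes from a pre-loaded buffer, and the tail is streamed from disk once the note is already sounding.

```python
# A minimal sketch, assuming a generic disk-streaming voice (hypothetical
# names and buffer size): the first chunk of every sample stays resident in
# RAM so playback can start instantly; the rest is read from disk afterwards.
import wave

PRELOAD_FRAMES = 65536  # the "head" kept in RAM per sample

class StreamedSample:
    def __init__(self, path):
        self.path = path
        with wave.open(path, "rb") as w:
            self.head = w.readframes(PRELOAD_FRAMES)  # resident buffer
            self.total_frames = w.getnframes()

    def play(self, out):
        """out is any byte sink, e.g. an audio device or file object."""
        out.write(self.head)                           # latency-free attack
        with wave.open(self.path, "rb") as w:          # tail streamed from disk
            w.setpos(min(PRELOAD_FRAMES, self.total_frames))
            chunk = w.readframes(4096)
            while chunk:
                out.write(chunk)
                chunk = w.readframes(4096)
```

The point being that even in this dumbest possible design, something per sample has to be sitting in memory before the note arrives, which is why I can't see how "load on demand" alone gets you zero latency across the whole Cube.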
But here I'm speaking about "live" playback. And your observations make me wonder if we're looking at post facto playback. Could this be the "sequencer" seen in the teaser campaign, and have we broached the world of "look-ahead" or "second pass" interpretation? Give VSL one look at the music, and it would then decide how to perform it (or otherwise equip the user to make some better choices).
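To make the "second pass" idea concrete, here's a toy sketch, purely my own guesswork and not anything from the teaser: with the whole part available up front, the program can tag each note with an articulation by looking at its neighbours before a single sample is triggered. Every rule and threshold here is invented for illustration.

```python
# A minimal sketch of "look-ahead" interpretation (hypothetical rules):
# given the finished part, choose an articulation per note from context.
from dataclasses import dataclass

@dataclass
class Note:
    start: float      # seconds
    duration: float   # seconds
    pitch: int        # MIDI note number
    velocity: int     # 1-127

def second_pass(notes):
    """Return (note, articulation) pairs decided from the whole phrase."""
    notes = sorted(notes, key=lambda n: n.start)
    tagged = []
    for i, n in enumerate(notes):
        nxt = notes[i + 1] if i + 1 < len(notes) else None
        if nxt and n.start + n.duration >= nxt.start:   # overlaps the next note
            art = "legato"
        elif n.duration < 0.15:                         # very short note
            art = "staccato"
        elif n.velocity > 110:                          # hammered attack
            art = "sforzato"
        else:
            art = "sustain"
        tagged.append((n, art))
    return tagged
```

The difference from live playback is simply that the interpreter gets to see the note after the one it is about to play; a one-pass, real-time engine can only guess.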
The alternative would be re-synthesis, or waveform generation. I don't think we'd be seeing a Symphonic Cube of these mammoth proportions if Synful-like technology were being introduced.
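Just to be clear about the distinction I'm drawing: re-synthesis builds the waveform from parameters rather than reading a recording off disk. A deliberately crude illustration (my own toy additive oscillator, nothing to do with Synful's actual method):

```python
# A toy additive "re-synthesis" sketch: the sound is computed from a pitch
# and a handful of partials, with no sample library involved at all.
import math
import struct

def synth_note(freq_hz, seconds, sample_rate=44100):
    """Generate 16-bit mono PCM for a crude sawtooth-like tone."""
    frames = bytearray()
    for i in range(int(seconds * sample_rate)):
        t = i / sample_rate
        s = sum(math.sin(2 * math.pi * freq_hz * k * t) / k for k in range(1, 8))
        frames += struct.pack("<h", int(10000 * s / 3))  # keep within int16 range
    return bytes(frames)
```

If VSL were going that route, a recorded library of these proportions would make a lot less sense, which is exactly why I doubt it.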
At the end of the day (or two days, at this point), I still think we're going to get a sample-triggering program from VSL. It will analyze MIDI data, sure enough, but only to the end of playing back the right sample. As such, the core difference between the two technologies remains (or three, as I'm sure we'd happily consider your work, jbm, in this emerging embarrassment of riches).
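If that's right, then the "analysis" amounts to a dispatcher: look at the incoming MIDI, pick an articulation, and fire the matching sample. Something like this toy, where all the names, rules, and file paths are invented for illustration and none of it is VSL's actual design:

```python
# A minimal sketch of live sample-triggering (hypothetical map and rules):
# the only "intelligence" is choosing which recording to play for each note.
from collections import namedtuple

NoteOn = namedtuple("NoteOn", "time pitch velocity")

SAMPLE_MAP = {
    # (articulation, pitch) -> sample file; invented paths for illustration
    ("sustain", 60):  "violin_sus_C4.wav",
    ("staccato", 60): "violin_stc_C4.wav",
    ("legato", 62):   "violin_leg_D4.wav",
}

def choose_articulation(note_on, prev_note_off_time):
    """Decide from what little a real-time engine can see at note-on."""
    if prev_note_off_time is not None and note_on.time <= prev_note_off_time:
        return "legato"                      # new note starts before the last ends
    return "staccato" if note_on.velocity > 110 else "sustain"

def trigger(note_on, prev_note_off_time, play):
    art = choose_articulation(note_on, prev_note_off_time)
    sample = SAMPLE_MAP.get((art, note_on.pitch))
    if sample is not None:
        play(sample)                         # still sample playback, not synthesis
```

However clever the rules get, the output is still a pre-recorded sample, which is why I think the core difference between the technologies stands.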