William,
I received that notice as well, and was intrigued enough to check it out. Actually, it's not entirely accurate to call it synthesis -- it's a form of sample resynthesis, or synthesis by analysis. It does use a store of samples as "source material" and, like VSL, that source material consists of sampled performances. The only difference is that the sample set is translated into resynthesis parameters, rather than being used directly. Now, because of this translation, it's possible to move through the "sampled" material in a non-linear fashion. So, the MIDI-analysis part of the program basically looks for the best "match" in the sample set to recreate what's being input via MIDI.
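To make that "best match" idea concrete, here's a toy sketch in Python -- purely my own guess at the general approach, not the developer's actual code. Each analysis frame stores the pitch and dynamic level measured from the sampled performance, and the MIDI front end simply picks the closest frame and hands its resynthesis parameters to the engine:

analysis_frames = [
    # (MIDI pitch, dynamic level 0..1, resynthesis parameters)
    (60, 0.4, "params_A"),
    (60, 0.9, "params_B"),
    (62, 0.5, "params_C"),
]

def best_match(note, velocity):
    # Weight pitch much more heavily than dynamics when scoring candidates.
    target_dyn = velocity / 127.0
    def distance(frame):
        pitch, dyn, _ = frame
        return abs(pitch - note) * 10 + abs(dyn - target_dyn)
    return min(analysis_frames, key=distance)[2]

print(best_match(60, 110))  # -> "params_B"

A real engine would obviously match on far more dimensions (articulation, phrase context, spectral envelope), but the principle is the same: the samples become a searchable database of parameters rather than audio to be played back verbatim.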
I actually freaked out a bit when I saw the details because it's eerily close to a proposal I "pitched" to a few companies a couple of years ago -- right down to using additive + noise for the resynthesis, and using complete recordings of concert works for the sample set (as opposed to individual notes). Pretty wild. But then, the developer has been working on it for eight years, so I only managed to be six years behind the times! [;)]
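For anyone curious what "additive + noise" means in practice, here's a minimal Python/NumPy sketch of the general idea (the harmonic amplitudes are invented, and this says nothing about how the actual product does it): a note is rebuilt as a sum of harmonic sinusoids plus a low-level noise residual standing in for breath and bow noise.

import numpy as np

sr, dur = 44100, 1.0
t = np.arange(int(sr * dur)) / sr

f0 = 220.0                            # fundamental (A3)
harmonic_amps = [1.0, 0.5, 0.3, 0.2]  # amplitudes the analysis stage would supply

tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
           for k, a in enumerate(harmonic_amps))
tone += 0.02 * np.random.randn(len(t))  # the "noise" component of the model
tone /= np.max(np.abs(tone))            # normalize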
Back to your question. As Christian (1) pointed out, things are certainly moving in the direction of some sort of hybrid. It's just a question of what that hybrid will be, and how it will manage to bridge the gap between the sample and its resynthesis. Really, the only form of "ground up" synthesis that stands a chance is physical modeling, but until someone comes up with a feasible control structure, it's unlikely to find a place in the "orchestral" market. A sort of middle ground can be found in Melodyne, by Celemony Software. This program can perform some real miracles, in that it can separate out pitch, time, and formants -- it can even flatten out vibrato! If you mess around with Melodyne for a bit, you'll instantly recognize the potential for composers like ourselves. The key bit of "magic" lies in the ability to provide subtle variation to a given sample's pitch, vibrato, duration, and amplitude dynamically. This is what will really bring a sample library to life (okay, there's plenty of "life" in VSL, but you know what I mean!). There is little doubt in my mind that this is the direction in which VSL is moving. Remember this little statement?
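As a rough illustration of the kind of per-note variation I'm talking about, here's a short NumPy sketch that imposes a vibrato curve and a dynamic swell on a resynthesized tone. The numbers are invented; the point is that once a note exists as parameters, these curves can be reshaped freely for every occurrence of the note:

import numpy as np

sr, dur = 44100, 2.0
t = np.arange(int(sr * dur)) / sr

f0 = 440.0
vib_rate, vib_depth = 5.5, 0.006   # 5.5 Hz vibrato, roughly +/- 0.6% of pitch
inst_freq = f0 * (1 + vib_depth * np.sin(2 * np.pi * vib_rate * t))
phase = 2 * np.pi * np.cumsum(inst_freq) / sr   # integrate frequency to get phase

envelope = np.interp(t, [0.0, 0.2, 1.5, 2.0], [0.0, 1.0, 0.8, 0.0])  # attack, sustain, decay
tone = envelope * np.sin(phase)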
"Revolutionary software developments in the areas of music production and education will significantly affect the work of music creators. In the near future, quick access to samples will no longer be a question of RAM or hard-disk streaming, but will rely on a completely intuitive connection between the composer's sonic vision and the sounds required to achieve it."
Notice particularly the part about RAM and hard-disk streaming... These factors can only truly be transcended using some sort of sample resynthesis (e.g., the software that inspired this thread packs an entire orchestra into 40 MB). Again, it's just a matter of which technique is chosen, and how it's implemented.
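Some back-of-the-envelope arithmetic (purely illustrative figures -- I have no idea what the actual product stores) shows why analysis parameters beat raw PCM for size:

raw_per_sec    = 44100 * 2 * 2    # 16-bit stereo PCM: ~176 kB per second
params_per_sec = 50 * 2 * 50 * 2  # 50 partials x (amp, freq) x 50 frames/s x 16-bit: ~10 kB/s
print(raw_per_sec / params_per_sec)  # roughly 17x smaller, before any further compression

Multiply that across gigabytes of orchestral samples and a footprint in the tens of megabytes starts to look plausible.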
J.