By sheer effort and torture I've been developing a program for sample selection (yes, some may remember me saying this as long as a year ago... still working...). It uses a 1-second (actually, user-definable) delay to analyze and buffer samples before choosing which ones to play. The program basically works, but it's buggy. It does, however, use very little memory, and instruments load almost immediately. And it uses pure VSL, with no messing about!
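For anyone curious what that delay buys you: the idea is to hold incoming events back for a short lookahead window so that later context (the next note, its interval, etc.) can inform how the earlier one is rendered. Here's a minimal Python sketch of that lookahead idea -- my own illustration, not the program's actual code, and with the lookahead counted in events rather than seconds for simplicity:

```python
from collections import deque


class LookaheadBuffer:
    """Delay incoming events by a fixed lookahead so that
    later events can inform how earlier ones are handled
    (e.g. picking a legato vs. detached sample)."""

    def __init__(self, delay_events: int):
        self.delay = delay_events
        self.buf = deque()

    def push(self, event):
        """Add a new event; return the oldest buffered event
        once enough lookahead has accumulated, else None."""
        self.buf.append(event)
        if len(self.buf) > self.delay:
            # This event has had `delay` events of lookahead;
            # at this point we could inspect self.buf to choose a sample.
            return self.buf.popleft()
        return None
```

So with a lookahead of two events, the first note only emerges after the third arrives, which is exactly the sort of latency the user-definable delay trades for smarter sample choices.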
Beyond that, I had a thought about Synful the other day... Because it uses a highly "fragmented" sample source, perhaps part of its "synthy" sound has more to do with the resonances of the instrument's body being chopped up. After all, hasn't just about everyone who's tried it noticed that it sounds WAAAAY better with even the smallest bit of reverb? That is, it sounds different in a very essential way with reverb -- VSL sounds better with reverb, but not entirely different...
So, I was thinking that what the developer should do is incorporate IR-based resonance models into the program itself, selecting the resonance model along with the instrument. Just a thought... in case he's reading!
Otherwise, for the end-user, it might make sense to try creating a violin-body IR (in Altiverb, or whatever) at home, then applying that to Synful first, at 100% "wet".
This could make Synful sound much better, even for "dry" playback (yeah, pseudo-dry, but you know what I mean...)
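For the technically inclined, the at-home trick is just a convolution: the dry Synful output convolved with the body IR, with none of the dry signal mixed back in (that's what 100% "wet" means). A minimal Python sketch with NumPy -- the function name and normalization are my own choices, not anything from Altiverb or Synful:

```python
import numpy as np


def apply_body_ir(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Convolve a dry signal with a body impulse response,
    returning a 100% 'wet' result (no dry signal mixed in)."""
    wet = np.convolve(dry, ir)
    # Normalize to the peak so the result can't clip
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet
```

Feeding it a single-sample click just returns the IR itself, which is a handy sanity check: the body resonance is exactly what gets "painted onto" every sample.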
J.