@julian said:
However I'm still interested in the 10-1 VSL compression claim - no one else, to my knowledge, has got beyond 2-1 without affecting data integrity i.e. not lossless.
I suspect that the high compression ratio is due to the (highly repetitive) nature of the data being compressed.
Consider how very low quality sample libraries work (or how the early ones did): a library might have just one sample per pitch, recorded at velocity 50. To play the same note at velocity 100 it would simply double the amplitude of that sample, which, although far from perfect, is a reasonable approximation. So for compression purposes, if you do have a real velocity 100 sample, what you could store instead of that sample itself is the difference between the velocity 100 sample and the velocity 50 sample with its amplitude doubled.
Now, whilst that alone may not achieve 10:1 compression, consider the difference between a velocity 51 sample and the velocity 50 sample (again with an appropriate increase in amplitude): that residual would be tiny, so the achievable compression ratio could well exceed 10:1.
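To make the velocity case concrete, here is a minimal Python/NumPy sketch. It is purely my own illustration, not a claim about what VSL actually does: the louder layer is predicted by scaling the quieter one, and only the (much smaller) residual would need to be stored and handed to a conventional lossless coder. The function name, gain value and toy signals are all invented for the example.

```python
import numpy as np

def residual_vs_scaled_layer(loud, quiet, gain):
    """Difference between a recorded layer and a gain-scaled quieter layer."""
    prediction = quiet * gain        # e.g. gain = 2.0 to fake velocity 100 from velocity 50
    return loud - prediction         # small values when the prediction is good

# Toy data standing in for two velocity layers of the same note.
rng = np.random.default_rng(0)
quiet = rng.normal(scale=0.1, size=48000).astype(np.float32)
loud = (2.0 * quiet + rng.normal(scale=0.002, size=48000)).astype(np.float32)

residual = residual_vs_scaled_layer(loud, quiet, gain=2.0)
print(np.abs(loud).mean(), np.abs(residual).mean())   # the residual is far smaller
```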
Similar compression can be achieved across pitch: instead of storing the entire C2 sample, store the difference between the C2 sample and the C1 sample played at double speed (which shifts it up an octave).
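The same trick for pitch, again just an illustrative sketch with invented names and toy data: predict C2 by playing C1 at double speed (here, as crudely as possible, by dropping every other sample) and store only the residual against the "real" C2 recording.

```python
import numpy as np

def residual_vs_octave_shift(c2, c1):
    """Residual between C2 and C1 sped up one octave (naive 2:1 decimation)."""
    prediction = c1[::2]                  # playing at double speed ~ one octave up
    n = min(len(c2), len(prediction))     # lengths rarely match exactly
    return c2[:n] - prediction[:n]

sr = 48000
f_c1 = 32.70                              # C1 in Hz
t1 = np.arange(2 * sr) / sr               # 2 s of C1
t2 = np.arange(sr) / sr                   # 1 s of C2
c1 = (np.sin(2 * np.pi * f_c1 * t1) * np.exp(-t1)).astype(np.float32)
# The "real" C2 is almost, but not exactly, C1 an octave up; the small noise
# term stands in for whatever a genuine recording would add.
rng = np.random.default_rng(1)
c2 = (np.sin(2 * np.pi * 2 * f_c1 * t2) * np.exp(-2 * t2)
      + rng.normal(scale=0.003, size=sr)).astype(np.float32)

residual = residual_vs_octave_shift(c2, c1)
print(np.abs(c2).mean(), np.abs(residual).mean())   # the residual is far smaller
```

In both sketches the original sample can be reconstructed exactly as prediction plus residual, so the scheme stays lossless; whether anything like 10:1 falls out in practice depends entirely on how small those residuals are for real recordings.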
Now, I don't know whether VSL's compression techniques are based on any of the above, but the reasoning is enough to persuade me it is plausible that the kind of data needed in a sample library has characteristics which can be exploited to achieve higher compression ratios than are normally possible on more generic data sets.
Matthew