@cm said:
another issue is the needed amount of preloaded samples. this is basically only necessary to fill the various buffers (sound card, audio application, operating system, hard disk I/O) and *hide* the various latencies.
128 kB (= 32,768 samples at 16-bit stereo) is a common value for most sampling engines - ViennaInstruments needs far less (and is 24 bit!).
256 bytes (= 64 samples at 16-bit stereo) is a common value for sound cards - many users need a higher value because their system cannot deliver such a data stream continuously.
any efforts in this direction (reducing the amount of preloaded data) are useless IMO and remain theoretical unless we get storage media with almost-zero seek time - like CompactFlash or solid-state disks - with enough capacity at a reasonable price.
christian
Just reading this again... not sure if I misread your post, but you seem to be indicating that you would still need a preload of the original size?
That isn't so.
The approach I am suggesting would reduce the preload buffer size by a factor of n (which is where the memory savings occur). The buffer would be a 'virtual' buffer in that it would only hold some of the data and synthesise the missing data that lies between the stored points. By synthesise I mean that it would either repeat the same sample point n times or interpolate between the points on either side.
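To make the idea concrete, here's a minimal sketch (my own illustration, not anything from the VI code - class and method names are made up): keep every n-th sample, then fill the gaps either by repeating each kept point n times or by linear interpolation between neighbours.

```java
public class VirtualPreload {
    // Keep only every n-th sample: this is the reduced 'virtual' preload buffer.
    static short[] decimate(short[] in, int n) {
        short[] out = new short[(in.length + n - 1) / n];
        for (int i = 0; i < out.length; i++) out[i] = in[i * n];
        return out;
    }

    // Option 1: reconstruct by repeating each stored point n times (sample-and-hold).
    static short[] reconstructRepeat(short[] stored, int n, int length) {
        short[] out = new short[length];
        for (int i = 0; i < length; i++) out[i] = stored[i / n];
        return out;
    }

    // Option 2: reconstruct by linear interpolation between adjacent stored points.
    static short[] reconstructInterpolate(short[] stored, int n, int length) {
        short[] out = new short[length];
        for (int i = 0; i < length; i++) {
            int idx = i / n, frac = i % n;
            short a = stored[idx];
            // at the end of the buffer there is no right neighbour: hold the last point
            short b = (idx + 1 < stored.length) ? stored[idx + 1] : a;
            out[i] = (short) (a + (b - a) * frac / n);
        }
        return out;
    }
}
```

The memory saving is exactly the factor n: only `decimate`'s output would live in RAM, and reconstruction happens on the fly while the disk stream catches up.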
I am presuming that the pre-load buffer is entirely controlled inside the C++ VI application to make this possible?
Just in case this wasn't clear...
I'm still looking into trying this out... I want to write a Java app that reads a 44,100 Hz .wav file and creates a new .wav file using this approach. I want to produce .wav files at different frequencies and with different 'draft' settings and see what they sound like.
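A rough sketch of what that Java app could look like, using the standard `javax.sound.sampled` API (the class name, file handling, and the choice of sample-and-hold reconstruction are my assumptions for illustration):

```java
import javax.sound.sampled.*;
import java.io.*;

public class DraftWav {
    // Read a .wav, keep every n-th sample frame, reconstruct the gaps by
    // repeating the nearest earlier kept frame, and write a new .wav so
    // the 'draft' quality can be auditioned by ear.
    public static void process(File in, File out, int n) throws Exception {
        AudioInputStream ais = AudioSystem.getAudioInputStream(in);
        AudioFormat fmt = ais.getFormat();
        byte[] data = ais.readAllBytes();
        int frameSize = fmt.getFrameSize();      // bytes per sample frame
        int frames = data.length / frameSize;
        byte[] processed = new byte[frames * frameSize];
        for (int f = 0; f < frames; f++) {
            // sample-and-hold: each frame is replaced by the last kept frame
            int src = (f / n) * n;
            System.arraycopy(data, src * frameSize,
                             processed, f * frameSize, frameSize);
        }
        AudioInputStream outStream = new AudioInputStream(
                new ByteArrayInputStream(processed), fmt, frames);
        AudioSystem.write(outStream, AudioFileFormat.Type.WAVE, out);
    }
}
```

Swapping the sample-and-hold loop for linear interpolation would give the second 'draft' variant to compare against.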
The proof of the pudding - in this case - is in the hearing...
Stay tuned... (if you've not already tuned out)