ok, i understood the term *sample* in the sense it is used in the VSL product description (violin staccato has xy samples, etc.).
if we look at it in a digital manner we need some clear definitions first:
- the headers of a PCM/RIFF file use 32 bit fields (this is the reason why a RIFF file is limited to 4 GB maximum, usually even 2 GB because of the difference between signed and unsigned integers)
- the sampling bit-depth can be 8/16/24/32 bit (VSL is 16 bit currently; of course it could theoretically be lower than 8 bit, but i'm sure we wouldn't like that. additionally, 8 bit is stored as unsigned integer, whereas 16 bit is signed integer, and 32 bit would be floating point)
- the sampling rate can be anything (VSL is using 44100 Hz currently - /2 = 22050, /4 = 11025, /8 = 5512.5 ... oops, here we would leave the integers and have to start rounding, because i can't display half a sample)
- PCM data is stored interleaved in frames, and each frame consists of sequential data for the used channels (2 in the case of VSL because of stereo)
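to make these definitions a bit more tangible, here is a minimal sketch in Python; the file name violin_staccato.wav is just a hypothetical example, not an actual VSL file:

```python
# minimal sketch: peek at the 32 bit RIFF size field and the frame layout
# of a 16 bit stereo PCM file ("violin_staccato.wav" is a made-up name).
import struct
import wave

with open("violin_staccato.wav", "rb") as f:
    riff, size, wave_id = struct.unpack("<4sI4s", f.read(12))
    # 'size' is an unsigned 32 bit field -> hard limit of 4 GB per RIFF file;
    # tools reading it as a signed 32 bit integer already stop at 2 GB.
    print(riff, size, wave_id)              # b'RIFF' <size> b'WAVE'

with wave.open("violin_staccato.wav", "rb") as w:
    channels  = w.getnchannels()            # 2 (stereo)
    sampwidth = w.getsampwidth()            # 2 bytes = 16 bit
    rate      = w.getframerate()            # e.g. 44100 Hz
    frames    = w.getnframes()
    # one interleaved frame = channels * sampwidth bytes (4 bytes for 16 bit stereo)
    print("PCM data size:", frames * channels * sampwidth, "bytes")
```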
i'm basically referring to this description
- the Nyquist-Shannon theorem says that unambiguous reconstruction is possible if the signal is bandlimited and the sampling frequency is greater than twice the signal bandwidth. that's why telephone lines don't sound too good, and i think a sampling frequency of 11025 Hz would leave us with an unacceptable *quality*.
one could possibly now *invent* (= interpolate) the missing samples to stay at 44100, or repeat the *reference sample*, but i've never tried that and probably wouldn't like to hear the result. do you have any examples of that, so the quality could be compared by ear?
based on this - where would you cut the amount of information in half (and further /4, /8, etc.) without losing acceptable sound quality?
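since i have no audio examples at hand, here is a very rough sketch of the two options mentioned above - repeating the kept *reference sample* versus linear interpolation. purely illustrative, without any band-limiting filter (so it would alias), and with made-up numbers, not real VSL data:

```python
# keep every 2nd sample, then go back to the original rate either by
# repeating each kept sample or by linear interpolation (illustration only).
def decimate(samples, factor=2):
    return samples[::factor]

def reconstruct_repeat(samples, factor=2):
    # zero-order hold: repeat the kept "reference sample"
    return [s for s in samples for _ in range(factor)]

def reconstruct_interpolate(samples, factor=2):
    # linear interpolation between neighbouring kept samples
    out = []
    for a, b in zip(samples, samples[1:]):
        for i in range(factor):
            out.append(a + (b - a) * i / factor)
    out.extend([samples[-1]] * factor)
    return out

original = [0, 3, 6, 9, 12, 9, 6, 3]        # pretend mono sample values
half     = decimate(original)                # [0, 6, 12, 6]
print(reconstruct_repeat(half))              # [0, 0, 6, 6, 12, 12, 6, 6]
print(reconstruct_interpolate(half))         # [0.0, 3.0, 6.0, 9.0, 12.0, 9.0, 6, 6]
```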
another issue is the needed amount of preloaded samples. this is basically only necessary to fill the various buffers (soundcard, audio application, operating system, harddisk I/O) and *hide* the various latencies.
128 kB (= 32768 samples of 16 bit stereo) is a common value for most sampling engines - ViennaInstruments needs far less (and is 24 bit!).
256 bytes (= 64 samples of 16 bit stereo) is a common value for soundcards - many users need to use a higher value because their system cannot deliver such a data stream continuously.
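just to make the arithmetic behind those figures explicit, a small sketch (the byte values are the quoted common ones, not measurements):

```python
# sanity check for the buffer figures above (16 bit stereo at 44100 Hz)
RATE       = 44100       # Hz
FRAME_SIZE = 2 * 2       # 2 bytes per sample * 2 channels = 4 bytes per frame

def frames_and_latency(buffer_bytes):
    frames = buffer_bytes // FRAME_SIZE
    return frames, frames / RATE * 1000.0   # latency hidden, in milliseconds

print(frames_and_latency(128 * 1024))   # (32768, ~743 ms preloaded per sample)
print(frames_and_latency(256))          # (64, ~1.45 ms soundcard buffer)
```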
any efforts in this direction (reducing the amount of preloaded data) are useless IMO and will stay theoretical unless we get storage media with an almost-zero seek time, like compact flash or solid state disks, with enough capacity for a reasonable price.
christian
and remember: only a CRAY can run an endless loop in just three seconds.