@cm said:
hmmm, and how shall the software guess what you want to do at a certain point? this is also related to the way streaming is defined!
to stay with your supermarket example: if you drive with your basket through the rows, how can the basket or sales-person know what's missing in your fridge?
christian
Your analogy is wrong: knowing what is in your fridge is not part of the service a supermarket offers. Managing samples efficiently is most definitely a part of what your service should offer. If you are going to make an analogy, then the samples are the equivalent of the supermarket stock: supermarkets restock or clear the shelves based on customer demand. If something is popular it is restocked; if not, it is never stocked again. This is a better analogy: an algorithm that sits behind the scenes, that you never see, which both makes the customer happier and helps the supermarket serve the customer better.
So... how to do it?
It is easy... and a standard technique used in IT... it is a cache that unloads the least recently used item. It works like this (a rough sketch follows the list):
1) Associate a 'last time played' timestamp with each sample.
2) Every time a sample is played, e.g. receives a MIDI note, update the timestamp on the sample. This indicates that the sample is in use.
3) Only if memory starts to run out, unload the least recently used samples... or, ideally, the samples that have not been used at all. Do this either a) during playback, or b) whilst the user is not busy... e.g. as a background task whilst they are away from their PC.
4) When a sample receives a MIDI event and is not loaded into memory, load that sample into memory there and then, with obviously a small downside on latency.
5) Provide a switch so that, by default, all samples start unloaded from memory when a patch is first loaded. Each sample is then lazily loaded into memory the first time it is played, so no unnecessary samples are ever loaded into memory.
6) Provide a further switch to control the amount of pre-load, so that pre-load can be set to 0.0 seconds, allowing a virtually limitless number of VSL instruments on a single machine.
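To make the idea concrete, here is a minimal sketch in Java of the cache described above, built on Java's standard LinkedHashMap in access order. The names SampleCache, SampleData, loadFromDisk, onMidiNote and the byte-based memory budget are all illustrative assumptions of mine, not anything from VSL; a real sampler would hold audio buffers and stream from disk, but the eviction and lazy-loading logic is exactly the standard technique described in the list.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Rough sketch of an LRU sample cache: samples are loaded lazily on the
 * first MIDI note that needs them, and the least recently played sample
 * is unloaded once the memory budget is exceeded.
 */
public class SampleCache {

    /** Placeholder for decoded sample data; a real engine would hold audio buffers. */
    static class SampleData {
        final byte[] pcm;
        SampleData(byte[] pcm) { this.pcm = pcm; }
    }

    private final long memoryBudgetBytes;
    private long usedBytes = 0;

    // accessOrder = true makes the map iterate from least to most recently used,
    // which gives us the 'last time played' ordering from step 1 for free.
    private final LinkedHashMap<String, SampleData> cache =
            new LinkedHashMap<>(256, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, SampleData> eldest) {
                    // Step 3: only evict when memory starts to run out.
                    if (usedBytes > memoryBudgetBytes) {
                        usedBytes -= eldest.getValue().pcm.length;
                        return true; // unload the least recently played sample
                    }
                    return false;
                }
            };

    public SampleCache(long memoryBudgetBytes) {
        this.memoryBudgetBytes = memoryBudgetBytes;
    }

    /**
     * Called when a MIDI note arrives for a sample (steps 2 and 4):
     * a hit refreshes the sample's 'last played' position, a miss loads it
     * from disk there and then, paying the small latency cost once.
     */
    public SampleData onMidiNote(String samplePath) {
        SampleData data = cache.get(samplePath); // get() also marks it most recently used
        if (data == null) {
            data = loadFromDisk(samplePath);      // step 5: lazy load on first use
            usedBytes += data.pcm.length;
            cache.put(samplePath, data);          // may trigger eviction of the eldest entry
        }
        return data;
    }

    /** Stand-in for real disk streaming; here we just fake a small buffer. */
    private SampleData loadFromDisk(String samplePath) {
        return new SampleData(new byte[1024]);
    }
}
```

Note that this sketch evicts at most one sample per load and evicts synchronously; a real implementation would do the unloading in the background, as in step 3, but the ordering and bookkeeping are the same.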
The above has the following disadvantages:
Small latency issues if/when samples are loaded/unloaded on the fly.
The above has the following advantages:
Those composers with huge amounts of memory will never be hit by the downsides above, as they will never run out of memory and therefore never need any of these actions to be performed.
Those of us who don't care about real-time playing can configure VSL to never load any samples into memory and thus run the entire VSL orchestra on a laptop, given enough disk space, with the downside of extra latency. But who cares about that, given that we are not talking real-time? I could live with a 2 second latency if it meant limitless capacity and a single orchestral template.
Those of us who have limited amounts of memory but still need real-time playing will only be affected by the small latency issues above if we start to run out of memory. That simply tells us that we are doing too much work for one machine.
This type of cache and lazy loading/unloading is a standard IT technique that I use all the time in my day job designing trading systems in London investment banks... so I know it is easily possible.