your observation is presumably related to:
"As an application uses up space, the virtual memory system allocates additional swap file space on the root file system." and
"When the number of pages on the free list falls below a threshold (determined by the size of physical memory), the pager attempts to balance the queues."
this means that depending on the number of applications running, the available physical memory, and the memory allocated per application and in total, the pager acts (or does not act) beyond a *certain* (which?) threshold - so the same system might behave differently in a different situation.
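just for illustration, this is roughly how one could watch the swap usage the quoted documentation talks about on mac os x (a minimal c sketch; sysctlbyname() with "vm.swapusage" is a standard api, everything else here is made up for the example):

/* sketch: query current swap usage on mac os x via sysctl */
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    struct xsw_usage swap;
    size_t len = sizeof(swap);

    /* vm.swapusage reports total/used/available swap in bytes */
    if (sysctlbyname("vm.swapusage", &swap, &len, NULL, 0) == 0) {
        printf("swap total: %llu MB, used: %llu MB, free: %llu MB\n",
               (unsigned long long)(swap.xsu_total >> 20),
               (unsigned long long)(swap.xsu_used >> 20),
               (unsigned long long)(swap.xsu_avail >> 20));
    } else {
        perror("sysctlbyname");
    }
    return 0;
}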
an idea that comes to my mind would be to *touch* every loaded sample (in analogy to the giga load time optimizer) to tell the system the file *is in use* and therefore needs to be kept in resident memory ... but whether this is even possible is pure speculation, and it would for sure increase loading times ...
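pure speculation, but the *touch* idea would look roughly like this in c - either read one byte per page so the data is really faulted in, or use mlock() to wire the buffer (which can fail if physical memory or limits do not allow it); the function and variable names are of course invented and this is not our actual loader:

/* sketch: keep an already loaded sample buffer resident (speculative) */
#include <unistd.h>
#include <sys/mman.h>

/* touch one byte per page so every page is actually faulted in */
static void touch_pages(const char *buf, size_t len)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    volatile char sink = 0;
    for (size_t off = 0; off < len; off += page)
        sink += buf[off];
    (void)sink;
}

/* stronger variant: ask the kernel to wire the pages in physical memory */
static int wire_pages(const void *buf, size_t len)
{
    return mlock(buf, len);   /* 0 on success, -1 + errno on failure */
}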
however, the issue is known, and if our programmers find a straightforward method to overcome this inconvenience, we will see it implemented.
related to the same background is the question: why can the application not calculate how much memory will in fact be available before loading a new patch, if the needed memory is actually known?
not too satisfactory, and further weakened by the footnote: "These measurements will change with each new Mac OS X release. They are provided here to give you a rough estimate of the relative cost of system resource usage."
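for what it is worth, the *rough estimate* part would look something like this on mac os x - host_statistics() with HOST_VM_INFO is a documented mach call, but deciding which page counts (free, inactive, ...) to add up is exactly the fuzzy part:

/* sketch: rough estimate of memory available before loading a patch (mac os x / mach) */
#include <stdio.h>
#include <mach/mach.h>

int main(void)
{
    vm_statistics_data_t vm;
    mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
    vm_size_t page_size;

    host_page_size(mach_host_self(), &page_size);
    if (host_statistics(mach_host_self(), HOST_VM_INFO,
                        (host_info_t)&vm, &count) != KERN_SUCCESS) {
        fprintf(stderr, "host_statistics failed\n");
        return 1;
    }

    /* free pages are safe to count; inactive pages *may* be reclaimed by the pager */
    unsigned long long est =
        (unsigned long long)(vm.free_count + vm.inactive_count) * page_size;
    printf("rough estimate of available memory: %llu MB\n", est >> 20);
    return 0;
}

the value changes from moment to moment and says nothing about what the pager will do a second later, which is probably why the application does not promise such a calculation.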
i suspect this will give us more trouble and headaches as the amount of installed memory increases further, also on windows 64-bit systems ...
christian