@Vlzmusic said:
Gusfmm, you have to bring some order to those calculations. A typical full VSL instrument is 5-7 GB on disk. You play a G4 sustain, then a legato jump to C4 - there is physical distance between those two samples - you are not playing back something sequential, as in a file copy or movie playback.
Now multiply that by the 20-30 instruments you might use in a track (150-200 GB of data spread across the disk) - and if you use layered patches, multiply each voice by 2-4 as well.
-----EDIT------ Now I re-read your last post and got your point. You mean each note within itself is not fragmented. OK - it might not be, if you defragment (those large files can be fragmented as well). But when you play, notes are triggered, and that's called random in my book, even if each individual note isn't fragmented. I'm glad if someone's Caviar Black does the job, but I prefer an SSD solution, especially now that the Dimension series is about full orchestra individualization - that's about 50+ instruments.
Have a great weekend.😊
No disrespect intended, but you've got to understand how an SSD writes and reads data to be able to argue this. Let me be very brief and simplistic, as this is getting way off topic.
SSDs should not be defragmented; there is no need to. A mechanical drive, on the other hand, writes data in a fragmented way by the nature of its operation and therefore benefits from defragmenting. That covers part of your comment.
As for the other part, a sample player such as VIP does not handle these samples as individual notes being randomly loaded from disk. I understand you think your "random" playing translates into random disk reads, but that is not the case. Keep in mind there is a pre-buffer that is allocated and managed dynamically - too long to explain fully here, but this is not the first time I've seen a similar conversation in forums like this.
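If it helps to see the pre-buffer idea in code, here is a minimal sketch. This is not VIP's actual implementation; all names, sizes, and the fake "disk" are made up purely to illustrate the principle: the head of every sample sits in RAM, so a triggered note starts instantly while the rest of that one file is pulled in as a run of sequential chunk reads.

```python
# Minimal sketch of a streaming sampler's pre-buffer (illustrative only,
# not VIP's real design). Sizes and sample names are assumptions.

PREBUFFER_BYTES = 64 * 1024      # assumed head size held in RAM per sample
STREAM_CHUNK = 4 * 1024          # assumed read granularity ("page")

# Fake "disk": sample name -> full sample data (stands in for files on disk)
disk = {
    "violin_G4_sustain": bytes(300 * 1024),
    "violin_C4_legato":  bytes(250 * 1024),
}

# Preload phase: only the head of each sample is copied into RAM.
ram_heads = {name: data[:PREBUFFER_BYTES] for name, data in disk.items()}

def trigger_note(name):
    """Start playback from the RAM head, then stream the remainder of the
    same file as a run of sequential chunk reads."""
    head = ram_heads[name]
    yield head                                    # instant start, no disk access
    tail = disk[name][len(head):]                 # contiguous region of one file
    for offset in range(0, len(tail), STREAM_CHUNK):
        yield tail[offset:offset + STREAM_CHUNK]  # sequential reads, not seeks

# Triggering two "random" notes still produces two runs of sequential reads.
for note in ("violin_G4_sustain", "violin_C4_legato"):
    chunks = list(trigger_note(note))
    print(f"{note}: 1 head from RAM + {len(chunks) - 1} sequential disk chunks")
```

So the "randomness" of which notes you hit only decides where each streaming run starts; the bulk of the data still arrives as sequential reads within each sample file.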
Last point: say your symphonic composition is playing back 80 simultaneous notes with 2 or 3 layers each (a stretch) at a given point in time. In your mind, that would be 240 "random" samples read from disk.

First, they are not being streamed instantaneously from disk; the pre-buffer is likely handling a good part of that load. Secondly, even if they all had to be read at once, my point was that you would have 240 "simultaneous" I/O calls to disk, and the determining factor for performance would be sequential read speed, not random read speed, because each of those samples would most likely be a chunk of 200 KB, 1 MB, 2 MB, whatever, that is read from the SSD as sequential "pages" of 4 KB each. Take a single 200 KB sample: that is 50 sequential (likely, ideally) page reads, not random reads, and the sample itself is very unlikely to be heavily fragmented.

In summary, you are comparing 240 "random" starting locations on disk against a total of over 10,000 sequential page reads. Hope this really illustrates the concept for you, in an orderly fashion. It was always that way.
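Here is the same back-of-the-envelope arithmetic spelled out. The figures (80 voices, 3 layers, ~200 KB streamed per sample, 4 KB pages) are the illustrative assumptions from above, not measured values:

```python
# Back-of-the-envelope numbers from the paragraph above (assumptions, not measurements).
voices, layers = 80, 3
sample_bytes = 200 * 1024        # assumed chunk streamed per triggered sample
page_bytes = 4 * 1024            # typical SSD read page size

samples = voices * layers                         # 240 "random" starting points
pages_per_sample = sample_bytes // page_bytes     # 50 sequential page reads each
total_pages = samples * pages_per_sample          # 12,000 sequential page reads

print(f"{samples} random seek locations vs {total_pages} sequential page reads")
# -> 240 random seek locations vs 12000 sequential page reads
```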
Nobody is disputing the superior performance of an SSD.