Vienna Symphonic Library Forum

  • Vagn, please take the time to re-read the actual claim I came out against.

    A few posts earlier Gus said:

    "...First: you are NOT doing "massive random reads of samples" when you're streaming these libraries. The way data was written to disk when you installed them was sequentially written. Have you ever looked into one of the VSL folders on your drive and counted the number of files that make up the library? Very few, very large in size. Thus, benchmarking the drive's sequential, not random, read speed is the best way to try to estimate the expected drive performance on a sample streaming application."


    Sooo... come on! Very few, very large? What does that have to do with anything?

  • I'll agree with you that the files in the VSL folders tell us nothing about how the samples are actually read, despite what Gus seemed to imply. These large files are merely containers for the samples, compressed with VSL's own compression routine. But he is nevertheless correct in saying that random read speed does not govern the performance of sample streaming.

    Few (music) people seem to understand that sample streaming performance is largely decided by the drive's ability to maintain high sequential read speeds at high queue depths, which is what people confuse with random reads. Just looking at a drive's sequential read speed tells you absolutely nothing about how the drive will fare once you start hitting it with multiple streams, so knowing how much the drive's speed drops at increasing queue depths is the essential thing to look out for. Some of the fastest drives on the market completely collapse at queue depths of 32 and more, to the point of having worse throughput than even some of the faster mechanical drives(!), though the poor seek performance of mechanical drives would make them crackle and pop long before this became a problem in an SSD-based usage scenario.
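
    As a rough way to see this for yourself, here is a minimal benchmark sketch: it launches N sequential readers against one large file and reports aggregate throughput as N grows (a crude stand-in for queue depth). The file path, chunk size and per-stream volume are made-up values, and it ignores OS page-cache effects; a serious benchmark would drop caches or use direct I/O between runs.

        # Sketch only: aggregate sequential read throughput vs. number of concurrent streams.
        # TEST_FILE is a placeholder; point it at any large file on the drive under test.
        import os
        import threading
        import time

        TEST_FILE = "/path/to/large_test_file.bin"   # hypothetical test file on the SSD
        CHUNK = 1024 * 1024                          # 1 MiB sequential reads
        READ_PER_STREAM = 256 * 1024 * 1024          # 256 MiB read by each stream

        def stream_reader(offset):
            # Read READ_PER_STREAM bytes sequentially, starting at this stream's own offset.
            with open(TEST_FILE, "rb", buffering=0) as f:
                f.seek(offset)
                remaining = READ_PER_STREAM
                while remaining > 0:
                    data = f.read(min(CHUNK, remaining))
                    if not data:
                        break
                    remaining -= len(data)

        def run(num_streams):
            size = os.path.getsize(TEST_FILE)
            threads = [threading.Thread(target=stream_reader, args=(i * size // num_streams,))
                       for i in range(num_streams)]
            start = time.perf_counter()
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            elapsed = time.perf_counter() - start
            total_mib = num_streams * READ_PER_STREAM / (1024 * 1024)
            print(f"{num_streams:3d} streams: {total_mib / elapsed:8.1f} MiB/s aggregate")

        for n in (1, 4, 8, 16, 32):
            run(n)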


  • Oh and one last thing: saying that larger SSDs have better performance than smaller ones is a stretch. It's entirely down to the design of the drives. Going from a 128 GB drive to a 256 GB one of the same make might yield double the performance in some categories, while going from 256 GB to 512 GB could show equal or even lower speed. Some manufacturers might even use NAND of differing fabrication sizes within the same model range, making it less transparent what performance to expect from a given drive.

    These things make it impossible to support any wholesale claim like "bigger is always faster".


  • Hi guys, it seems you're focusing on the physical characteristics of the SSD but neglecting the memory management of the Vienna player. Perhaps large memory sizes will be used efficiently by the player to keep frequently used samples in RAM without needing to access the SSD nearly as often. Increased RAM, say from 32 to 64 GB, will hopefully be made use of by the Vienna player. Bill
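
    Purely to illustrate that idea, and not as a claim about how the Vienna player actually manages its memory: a toy least-recently-used sample cache like the sketch below only touches the SSD on a miss, so spare RAM directly reduces disk traffic for frequently used samples.

        # Hypothetical illustration only; nothing here reflects VSL's real memory management.
        from collections import OrderedDict

        class SampleCache:
            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.used = 0
                self.entries = OrderedDict()        # path -> sample bytes, in LRU order

            def get(self, path):
                if path in self.entries:
                    self.entries.move_to_end(path)  # hit: served from RAM, no SSD access
                    return self.entries[path]
                with open(path, "rb") as f:         # miss: one read from the SSD
                    data = f.read()
                self.entries[path] = data
                self.used += len(data)
                while self.used > self.capacity:    # evict least recently used samples
                    _, old = self.entries.popitem(last=False)
                    self.used -= len(old)
                return data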

  • It might, Bill. But the whole shebang of the SSD is about lightning-fast seeking and responding, and that's what's being promoted among the VI Pro features (look up the feature list). It IS promoted as SSD-ready to do exactly the opposite of what you say: to decrease the buffer and save some RAM. And why is that possible? Because of the SSD. Because it does matter.
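
    To put rough numbers on the RAM argument (all figures below are hypothetical, not VSL specifications), the saving from a smaller per-sample prebuffer adds up quickly across a large template:

        # Back-of-the-envelope arithmetic; every number here is an assumption.
        preloaded_samples = 100_000          # samples kept resident in a big template
        hdd_prebuffer = 64 * 1024            # bytes preloaded per sample for a slow drive
        ssd_prebuffer = 8 * 1024             # bytes preloaded per sample with a fast SSD

        saved = preloaded_samples * (hdd_prebuffer - ssd_prebuffer)
        print(f"RAM saved: {saved / 2**30:.1f} GiB")   # roughly 5.3 GiB with these numbers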

  • [quote=Gusfmm] Last point, while your symphonic composition may be playing back say 80 simultaneous notes with 2 or 3 layers each (a stretch) at a given point in time, that would be, in your mind, that 240 "random" samples read from disk. First, they are not being streamed instantaneously from disk, there is a prebuffer likely handling a good part of that load; and secondly, if they had to be read instantaneously, my point was that you would have "simultaneous" 240 I/O calls to disk, and the determinant factor for performance would be the sequential read speed, not the random read speed, as each of these samples would most likely be chunks of 200KB, 1MB, 2MB, whatever, that are read on the SSD as sequential "pages" of 4KB each. So take a single 200KB sample. That'd be 50 sequential (likely, ideally) reads, not random reads. The sample is very unlikely going to be highly fragmented. In summary, you are comparing

    When using VIPro with smaller buffers (2048), you can easily end up with prebuffer sizes of 8 KB. In that case, the most important characteristic to look for in a drive would be the "8K random read" figure. Normally, reviewers run 4K random read tests, which is the number I generally look at for sample streaming performance with SSDs.
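
    As a rough illustration of why the small-block figures matter (the sizes are assumptions for the sake of the arithmetic, not VIPro internals): an 8 KB prebuffer refill only spans a couple of 4 KiB pages, which is random-read territory, whereas the 200 KB tail of a sample covers dozens of pages that can largely be read back sequentially.

        # Illustrative arithmetic only: how many 4 KiB pages a single read touches.
        PAGE = 4 * 1024

        def pages(nbytes):
            return -(-nbytes // PAGE)          # ceiling division

        print(pages(8 * 1024))                 # 8 KB prebuffer refill  -> 2 pages
        print(pages(200 * 1024))               # 200 KB sample tail     -> 50 pages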


  •  Martin,

    Let's take your example of a prebuffer size of 2048. So what you're saying is that because the prebuffer per-sample allocation is 8 KB (256 samples), the most important characteristic is random read speed. I'd like to understand the rationale behind that. Let's go step by step, if you don't mind.

    Could you clarify what 'typically' happens when I press three keys on the piano and trigger three 'voices' on a single instance of VIP? My impression is that VIP plays back the 3 x 8 KB buffered sample starts while simultaneously proceeding to load and stream the rest of these three samples from disk; is this correct? If so, what would a typical note sample size be, say for a regular legato note?
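
    To make my impression concrete, here is a toy sketch of the model I have in mind (purely illustrative; I'm not claiming this is how VIP actually works): the note starts from a RAM-resident prebuffer while a background thread streams the remainder of the sample from disk.

        # Two-stage playback sketch: prebuffer from RAM first, then disk streaming.
        # Buffer and chunk sizes are assumptions, not VIP's real values.
        import queue
        import threading

        PREBUFFER_BYTES = 8 * 1024     # assumed per-voice prebuffer, as discussed above
        DISK_CHUNK = 64 * 1024         # assumed streaming chunk size

        class Voice:
            def __init__(self, sample_path, prebuffer):
                self.sample_path = sample_path
                self.prebuffer = prebuffer          # loaded into RAM at instrument-load time
                self.chunks = queue.Queue()         # audio data handed to the playback engine

            def trigger(self):
                # The audio engine can start rendering from the prebuffer immediately...
                self.chunks.put(self.prebuffer)
                # ...while a disk thread fetches the rest of the sample in the background.
                threading.Thread(target=self._stream_rest, daemon=True).start()

            def _stream_rest(self):
                with open(self.sample_path, "rb") as f:
                    f.seek(len(self.prebuffer))     # skip the part already held in RAM
                    while True:
                        chunk = f.read(DISK_CHUNK)
                        if not chunk:
                            break
                        self.chunks.put(chunk)

        # Pressing three keys would create three Voice objects and call trigger() on each:
        # three prebuffer playbacks from RAM plus three concurrent disk streams.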