Vienna Symphonic Library Forum

  • mathis, i haven't tested the fastrack family, because this type of raid controller is not very *intelligent* and there are lots of different chips and firmware revisions in use - you need to check every single piece for its capabilities. the onboard type is unusable, at least that has been my experience.
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
  • It would be helpful to know in what way you found these onboard controllers unusable.
    I am experiencing some oddities with my Promise controller and would like to know if they are similar to what you saw.

  • copy a really big file from a volume on the controller to another disk to calculate the MB/s - this should be 20 MB/s or more. then try to stream a lot of voices from the same volume (1 stereo 44.1 16 bit = 176.4 KB/s) to find out how much throughput the controller allows for streaming. the first time i tried this i was quite astonished to find out it was 1 MB/s or even less. this seems to apply to all applications that dynamically load only parts of files directly from disk instead of from a cache (a rough test sketch follows below).
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
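
    A rough Python sketch of this copy-vs-streaming comparison. The file path and voice count are placeholders, not from the post, and the OS read cache is not bypassed, so the test file should be much larger than RAM:

    import os, time

    TEST_FILE = r"W:\bigfile.dat"   # placeholder: a large file on the volume under test
    VOICE_CHUNK = 128 * 1024        # assumed bytes pulled per voice per access
    VOICES = 64                     # number of simulated streaming voices

    def sequential_mb_per_s(path, block=1024 * 1024):
        # one big linear read, comparable to copying the file
        size = os.path.getsize(path)
        start = time.time()
        with open(path, "rb", buffering=0) as f:
            while f.read(block):
                pass
        return size / (time.time() - start) / 2**20

    def streaming_mb_per_s(path, voices=VOICES, chunk=VOICE_CHUNK, rounds=50):
        # many interleaved small reads from different file regions,
        # similar to a sampler streaming many voices at once
        size = os.path.getsize(path)
        offsets = [int(i * size / voices) for i in range(voices)]
        handles = [open(path, "rb", buffering=0) for _ in range(voices)]
        total = 0
        start = time.time()
        for r in range(rounds):
            for i, f in enumerate(handles):
                f.seek(offsets[i] + r * chunk)
                total += len(f.read(chunk))
        elapsed = time.time() - start
        for f in handles:
            f.close()
        return total / elapsed / 2**20

    if __name__ == "__main__":
        print("sequential: %.1f MB/s" % sequential_mb_per_s(TEST_FILE))
        print("streaming : %.1f MB/s" % streaming_mb_per_s(TEST_FILE))
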
  • I use dskbench for benchmark tests and came to some interesting insights:

    This is the result for my Promise stripe-0 raid consisting of two 120 GB Hitachi 7,200 rpm disks:

    W:\>dskbench
    DskBench 2.12
    (c) 1998, SESA, J.M.Catena (cat@sesa.es, www.sesa.es)
    Timer Check = 1001 (should be near 1000)
    CPU Check = 50.22 % (should be near 50.00 %)
    CPU index (relative to Pro 200 MHz) = 5.851867
    Open = 0 ms
    Write = 3661 ms, 69.93 MB/s, CPU = 8.21 %
    Flush = 0 ms
    Rewin = 0 ms
    Read = 2881 ms, 88.86 MB/s, CPU = 15.84 %
    Close = 0 ms
    BlockSize = 131072, MB/s = 14.29, Tracks = 169.84, CPU = 2.53 %
    BlockSize = 65536, MB/s = 10.22, Tracks = 121.45, CPU = 1.94 %
    BlockSize = 32768, MB/s = 6.91, Tracks = 82.15, CPU = 2.23 %
    BlockSize = 16384, MB/s = 4.40, Tracks = 52.25, CPU = 2.09 %
    BlockSize = 8192, MB/s = 3.16, Tracks = 37.62, CPU = 3.35 %
    BlockSize = 4096, MB/s = 3.70, Tracks = 43.97, CPU = 6.36 %


    I don't fully understand the block-size figures at the end, but what struck me was the really high CPU usage needed to transfer these impressive amounts of data.

    Compare this to a Maxtor 5,400 rpm 80 GB:

    E:\>dskbench
    DskBench 2.12
    (c) 1998, SESA, J.M.Catena (cat@sesa.es, www.sesa.es)
    Timer Check = 991 (should be near 1000)
    CPU Check = 50.01 % (should be near 50.00 %)
    CPU index (relative to Pro 200 MHz) = 5.921833
    Open = 31 ms
    Write = 13369 ms, 19.15 MB/s, CPU = 1.36 %
    Flush = 36 ms
    Rewin = 0 ms
    Read = 9941 ms, 25.75 MB/s, CPU = 1.14 %
    Close = 0 ms
    BlockSize = 131072, MB/s = 11.00, Tracks = 130.78, CPU = 1.03 %
    BlockSize = 65536, MB/s = 5.83, Tracks = 69.27, CPU = 1.18 %
    BlockSize = 32768, MB/s = 3.01, Tracks = 35.75, CPU = 0.55 %
    BlockSize = 16384, MB/s = 1.51, Tracks = 17.93, CPU = 0.13 %
    BlockSize = 8192, MB/s = 1.50, Tracks = 17.80, CPU = 0.57 %
    BlockSize = 4096, MB/s = 1.00, Tracks = 11.92, CPU = 1.65 %


    I think that's why these cheap raids are not recommended. And this could also be the reason for the strange CPU peaks I notice during heavy disk activity on the raid. That's why I can't use it for streaming samples.

    In my DAW Samplitude I can "only" play back about 60 tracks of stereo 16/44.1. That adds up to roughly 60 * 176.4 KB/s = 10,584 KB/s, or about 10.3 MB/s. I think that's too little for a raid like this (see the quick calculation sketch below).

    I guess I'm going to kick the raid out and use the disks in plain, normal mode...

    Sorry if this is getting off topic.
    Best,
    - M
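
    A quick check of that arithmetic as a small Python helper; the slight differences versus the figures above come down to KB-vs-KiB rounding:

    # bytes per second for one stereo 16-bit/44.1 kHz voice:
    # 44,100 samples/s * 2 bytes * 2 channels = 176,400 B/s (~176.4 KB/s)
    BYTES_PER_STEREO_VOICE = 44100 * 2 * 2

    def mb_per_s_for_voices(voices):
        # sustained throughput needed for a given stereo voice count
        return voices * BYTES_PER_STEREO_VOICE / 2**20

    def voices_for_mb_per_s(mb_per_s):
        # stereo voice count a given sustained throughput can feed
        return int(mb_per_s * 2**20 // BYTES_PER_STEREO_VOICE)

    print("60 stereo voices need ~%.1f MB/s" % mb_per_s_for_voices(60))      # ~10.1 MB/s
    print("10.22 MB/s feeds ~%d stereo voices" % voices_for_mb_per_s(10.22)) # ~60, close to the
                                                                             # observed track count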

    that's why i hate benchmarks - they don't even state the file size used or whether read and/or write caches are active. 64 KB (65536) is the default block size for almost all raid controllers, and you see the lowest CPU usage with this value, but 10.22 MB/s is depressing for a raid 0.
    more honest for benchmarking is EZSCSI from adaptec (works also for ATA drives) - you can adjust file size and random/sustained read/write (a rough sketch of such a test follows below)
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
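
    As a rough illustration of a test that states its own parameters, a Python sketch that reports file size and block size and compares sustained against random reads. The path is a placeholder, and the OS read cache is not bypassed, so the test file should again be much larger than RAM:

    import os, random, time

    TEST_FILE = r"W:\bigfile.dat"     # placeholder: a large file on the volume under test
    BLOCK = 64 * 1024                 # 64 KB, the usual raid block size

    def read_test(path, block, sequential=True, seconds=10):
        size = os.path.getsize(path)
        done = 0
        with open(path, "rb", buffering=0) as f:
            start = time.time()
            while time.time() - start < seconds:
                if sequential:
                    data = f.read(block)
                    if not data:          # wrap around at end of file
                        f.seek(0)
                        continue
                else:
                    f.seek(random.randrange(0, size - block))
                    data = f.read(block)
                done += len(data)
        return done / (time.time() - start) / 2**20

    if __name__ == "__main__":
        size_mb = os.path.getsize(TEST_FILE) / 2**20
        print("file size: %.0f MB, block size: %d KB" % (size_mb, BLOCK // 1024))
        print("sustained read: %.1f MB/s" % read_test(TEST_FILE, BLOCK, True))
        print("random read   : %.1f MB/s" % read_test(TEST_FILE, BLOCK, False))
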
  • To complete my story: I have installed a new Hitachi 120 GB 7,200 rpm hard disk with 8 MB cache (the same model as my raid disks), and the test gives:

    G:\>dskbench
    DskBench 2.12
    (c) 1998, SESA, J.M.Catena (cat@sesa.es, www.sesa.es)
    Timer Check = 1002 (should be near 1000)
    CPU Check = 50.15 % (should be near 50.00 %)
    CPU index (relative to Pro 200 MHz) = 6.046698
    Open = 0 ms
    Write = 8379 ms, 30.55 MB/s, CPU = 1.25 %
    Flush = 26 ms
    Rewin = 0 ms
    Read = 8402 ms, 30.47 MB/s, CPU = 1.32 %
    Close = 0 ms
    BlockSize = 131072, MB/s = 11.68, Tracks = 138.87, CPU = 0.68 %
    BlockSize = 65536, MB/s = 11.64, Tracks = 138.42, CPU = 1.30 %
    BlockSize = 32768, MB/s = 7.46, Tracks = 88.73, CPU = 1.60 %
    BlockSize = 16384, MB/s = 5.00, Tracks = 59.49, CPU = 2.19 %
    BlockSize = 8192, MB/s = 3.77, Tracks = 44.85, CPU = 3.03 %
    BlockSize = 4096, MB/s = 3.01, Tracks = 35.78, CPU = 4.73 %



    My track count inside Samplitude comes out a few tracks lower than with the raid, about 54 stereo tracks at 16/44.1.

    So although I can't recommend a raid, especially for streaming, it's not slower than a single hard disk... [8-)]
    Actually there's hardly any benefit, but there is the big disadvantage of doubled hard-disk crash risk.

  • so what do we learn after comparing these numbers? 64 KB _is_ the default block size, cheap raid controllers don't help with streaming, ibm (=hitachi) disks are mediocre, certain benchmarks don't state anything about read/write caches, 8 MB caches are more of a hindrance than a help, etc.

    please consider that any application which depends on streaming pulls only 90 - 180 KB of _raw_ data from a wave (or gig) file in a single access - the exact number depends on settings and capabilities - so any part of the data path that does not let these small reads pass efficiently through the various caches will ultimately decrease performance (see the sketch below)

    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
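
    A back-of-the-envelope model of why those small per-access reads hurt, assuming one seek per access. The 8.5 ms seek, 4.2 ms rotational latency and 30 MB/s media rate are assumptions loosely based on figures mentioned in this thread:

    ACCESS_BYTES = 128 * 1024       # assumed raw data fetched per access (within the 90-180 KB range)
    VOICE_RATE = 176400             # B/s for one stereo 16/44.1 voice
    SEEK_S = 0.0085                 # assumed average seek time
    ROT_LATENCY_S = 0.0042          # assumed average rotational latency (7,200 rpm)
    MEDIA_RATE = 30 * 2**20         # assumed sequential media rate in B/s

    access_time = SEEK_S + ROT_LATENCY_S + ACCESS_BYTES / MEDIA_RATE  # seconds per access
    accesses_per_voice = VOICE_RATE / ACCESS_BYTES                    # accesses/s one voice needs
    max_voices = 1 / (access_time * accesses_per_voice)               # voices one disk can feed

    print("time per access: %.1f ms" % (access_time * 1e3))  # ~16.9 ms
    print("max voices     : %.0f" % max_voices)              # ~44 - the same order as the 54-60
                                                             # tracks reported above; real samplers
                                                             # batch and cache reads, so they do better
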
  • @Another User said:

    ibm (=hitachi) disks are mediocre,

    Oh shit, that's the conclusion? I chose them because of their low seek time according to their data sheets...

  • @Crystal said:

    1) Don't use Giga on your sequencer PC if possible.


    It is true that you want multiple PCs for VSL, but on the other hand I used to run GS on the same machine as my sequencers with good results. I did run into performance problems under heavily loaded situations, but you can still get a lot of work out of that sequencing PC, since sequencing is only a moderate load. The extra performance is there if you want to use it.

  • mathis, that fits pretty well with the numbers you posted. 256 MB in 8.3 s is ~30 MB/s (accelerated by the cache on the disk) - a sustained data rate of 11.6 MB/s would not make me very happy, and calculating from your voice count i read an effective throughput of 9 MB/s (the numbers are checked in the sketch below).
    with an average seek time of 8.5 ms according to the data sheet, this sounds *normal* ...

    the block size (for data access) is not the same as the sector size (used when formatting a disk). i'd recommend formatting hard disks used for audio with 4096 B/sector - this gives you less overhead than the default 512 but still allows you to use the built-in defragmentation of W2K/XP (which does not work with higher sector-size values)

    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
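
    Checking those figures in Python, with the 256 MB test size and 8.4 s write time taken from the single-disk dskbench run above and the 54 stereo tracks from the Samplitude test:

    TEST_MB = 256          # dskbench test size implied by 8379 ms at 30.55 MB/s
    WRITE_S = 8.379        # write time from the single-disk run above
    print("burst/cached rate  : %.1f MB/s" % (TEST_MB / WRITE_S))  # ~30.6 MB/s

    VOICE_RATE_KB = 176.4  # KB/s per stereo 16/44.1 voice
    TRACKS = 54            # stereo tracks reached in Samplitude
    print("effective streaming: %.1f MB/s" % (TRACKS * VOICE_RATE_KB / 1024))  # ~9.3 MB/s
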
  • @evanevans said:

    How inflexible. And all that hardware. geesh. Too bad you aren't into Apple computers.
    Evan Evans

    Hey Evan,

    Are you running the full Pro Edition successfully on your Mac?

    JHanks.