Vienna Symphonic Library Forum
Forum Statistics

182,766 users have contributed to 42,257 threads and 254,914 posts.

In the past 24 hours, we have 3 new thread(s), 20 new post(s) and 45 new user(s).

  • Dennis,

    We have a series of free Street Smart Guides that answer your questions:
    http://www.truespec.com/downloads/index.cfm

    In general with the Pro Edition, I would plan for two systems and then wait for the release of Giga 3.0 since the increased polyphony will make two today work like four this spring.

    Normally, we put strings, percussion and harp on one system and woodwinds and brass on the other.

    Peter Alexander
    www.truespec.com
    310-559-3779

  • @Nick Batzdorf said:

    Are there any 10K RPM SATA drives other than the Western Digital Raptors? The problem with them, of course, is that the largest ones are "only" 72GB.


    No, these are the only ones. And as I've said elsewhere, we need to look beyond the claim of "unlimited polyphony," since the Giga brochure clearly stated that with a 2.8GHz system they roughly doubled the existing polyphony. And this was using two IDE drives in a RAID.

    Raptor drives are within a few dollars of their SCSI cousins. The 74GB drive, my cost as a dealer, is roughly $250. Quite a few motherboards come with a SATA RAID, but this is only for TWO drives. So with Vienna, you either have to justify spending $500+ to warehouse just under 150GB of data, or spend roughly $340 to warehouse just under 320GB using two 160GB drives. But if you want capacity comparable to that 320GB with Raptors, the cost is $1000 for hard drives alone, plus approximately $100 for the RAID card, which you'll set to RAID 0 to get the warehousing space needed for Vienna.

    At the risk of offending Tascam, I have to urge caution.

    Unlimited polyphony is not really true.

    The software in absence of hardware is capable of unlimited polyphony (just like how programs used to advertise themselves as being able to record unlimited audio tracks). But this is really a misnomer since software must work within a hardware environment.

    The real questions are:

    1. With Giga 3.0 and the current hardware available using Windows XP 32-bit, what is the polyphony possibility?

    2. And at what cost so that we know the financial range of practicality?

    It's great to talk about Raptors, but let's not forget the 15,000RPM SCSI drives available. A 73GB 15,000RPM Cheetah drive costs $530 (my cost). For 146GB, either in a RAID or just as D and E drives, we're talking $1060 for two drives, and how much polyphony do we get for that cost? Plus, at 240GB for just the Pro Edition, you need bigger drives.

    I think that for now, customers are advised to get systems with the existing SATA 160GB drives in a RAID and then WAIT to see what really transpires on release.

    You can always change drives later once the real performance specs have been tested and published.

    Peter Alexander
    www.truespec.com
    310-559-3779
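
The dealer prices quoted in the two posts above reduce to a quick cost-per-GB comparison. A minimal Python sketch (the prices and capacities are those quoted in the thread; the $170 per-drive figure assumes half of the roughly $340 pair price):

```python
# Cost per GB for the drive options discussed above, at quoted dealer prices.
drives = {
    "160GB SATA 7200rpm": (170, 160),  # assumed: half of ~$340 for a pair
    "74GB Raptor 10Krpm": (250, 74),
    "73GB Cheetah 15Krpm": (530, 73),
}

for name, (price_usd, capacity_gb) in drives.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
```

The spread is roughly 1:3:7 per GB, which is the "financial range of practicality" question in a nutshell.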

  • @Nick Batzdorf said:

    My understanding is that disks of all sizes cost the same to make. The difference is that the yield is lower for larger ones. (Which of course negates my first sentence, but the point remains!)


    Not exactly. Higher capacities demand higher precision these days. Drives have been 3.5" for years; what has changed is that the magnetic tracks are packed closer and closer together. It's indeed easier and less expensive to manufacture with less precision. Speed increases with the packing density, but some of it is lost to correcting read errors. Apart from that, heat problems that can arise with increased spindle speed could also be part of the issue.

    Peter, I don't think you will get problems with Tascam. Even with 2.5, the 160 voices couldn't be reached on every system. I myself still think it's not worth the cost of the faster drives; 2 or 3 7200rpm drives should give enough polyphony, as long as there is a limit of 2 GB of RAM for precaching the files. Still, I'd like to see some experiments testing the performance when prebuffering is set lower than it is right now in 2.5 (I think 64 KB per stereo sample) - but I guess that would require HD seek times significantly lower than 9ms.

    PolarBear

  • @cm said:

    one thing that doesn't work at all for streaming is to use the type of onboard raid controllers from promise (and others) for creating a raid,


    Christian, does that also apply to the add-in PCI IDE RAID controllers from Promise, for example the FastTrak TX2000? Or do you only mean the RAID controllers built onto the motherboard?

    Bests,
    - M

  • mathis, i haven't tested the fasttrak family, because this type of raid controller is not very *intelligent* and there are lots of different chips and firmware in use - you need to check every single piece on its capabilities. the onboard type is unusable, at least that has been my experience.
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
  • It would be helpful to know in what way you experienced these onboard controllers as unusable.
    I'm experiencing some oddities with my Promise and would like to know if they're similar to what you experienced.

  • copy a really big file from a volume on the controller to another disk to calculate the MB/s - this should be 20 MB/s or more. then try to stream a lot of voices from the same volume (1 stereo 44.1 16 bit = 176.4 KB/s) to find out how much throughput the controller allows for streaming. the first time i tried this i was very astonished to find out it was 1 MB/s or even less. this seems to apply to all applications that dynamically load only parts of files directly from disk instead of from a cache.
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
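
christian's test boils down to dividing sustained streaming throughput by the per-voice data rate. A minimal sketch of that arithmetic (the 176.4 KB/s per stereo voice figure is from the post above; decimal megabytes are assumed):

```python
# One stereo voice at 44.1 kHz / 16-bit:
# 44100 samples/s * 2 bytes * 2 channels = 176,400 B/s (= 176.4 KB/s)
BYTES_PER_SEC_PER_VOICE = 44100 * 2 * 2

def max_voices(streaming_mb_per_s: float) -> int:
    """Stereo 16/44.1 voices sustainable at a given streaming throughput."""
    return int(streaming_mb_per_s * 1_000_000 / BYTES_PER_SEC_PER_VOICE)

print(max_voices(20.0))  # sequential-copy speed suggests ~113 voices
print(max_voices(1.0))   # actual streaming throughput allows only ~5
```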
  • I use dskbench for benchmark tests and came to some interesting insights:

    This is the result of my Promise stripe-0 raid consisting of two 120GB Hitachi 7.200rpm disks:

    W:\>dskbench
    DskBench 2.12
    (c) 1998, SESA, J.M.Catena (cat@sesa.es, www.sesa.es)
    Timer Check = 1001 (should be near 1000)
    CPU Check = 50.22 % (should be near 50.00 %)
    CPU index (relative to Pro 200 MHz) = 5.851867
    Open = 0 ms
    Write = 3661 ms, 69.93 MB/s, CPU = 8.21 %
    Flush = 0 ms
    Rewin = 0 ms
    Read = 2881 ms, 88.86 MB/s, CPU = 15.84 %
    Close = 0 ms
    BlockSize = 131072, MB/s = 14.29, Tracks = 169.84, CPU = 2.53 %
    BlockSize = 65536, MB/s = 10.22, Tracks = 121.45, CPU = 1.94 %
    BlockSize = 32768, MB/s = 6.91, Tracks = 82.15, CPU = 2.23 %
    BlockSize = 16384, MB/s = 4.40, Tracks = 52.25, CPU = 2.09 %
    BlockSize = 8192, MB/s = 3.16, Tracks = 37.62, CPU = 3.35 %
    BlockSize = 4096, MB/s = 3.70, Tracks = 43.97, CPU = 6.36 %


    I don't fully understand the thing with the block sizes at the end, but what struck me was the really high CPU usage needed for transferring these impressive amounts of data.

    Compare this with a Maxtor 5.400rpm 80GB:

    E:\>dskbench
    DskBench 2.12
    (c) 1998, SESA, J.M.Catena (cat@sesa.es, www.sesa.es)
    Timer Check = 991 (should be near 1000)
    CPU Check = 50.01 % (should be near 50.00 %)
    CPU index (relative to Pro 200 MHz) = 5.921833
    Open = 31 ms
    Write = 13369 ms, 19.15 MB/s, CPU = 1.36 %
    Flush = 36 ms
    Rewin = 0 ms
    Read = 9941 ms, 25.75 MB/s, CPU = 1.14 %
    Close = 0 ms
    BlockSize = 131072, MB/s = 11.00, Tracks = 130.78, CPU = 1.03 %
    BlockSize = 65536, MB/s = 5.83, Tracks = 69.27, CPU = 1.18 %
    BlockSize = 32768, MB/s = 3.01, Tracks = 35.75, CPU = 0.55 %
    BlockSize = 16384, MB/s = 1.51, Tracks = 17.93, CPU = 0.13 %
    BlockSize = 8192, MB/s = 1.50, Tracks = 17.80, CPU = 0.57 %
    BlockSize = 4096, MB/s = 1.00, Tracks = 11.92, CPU = 1.65 %


    I think that's why these cheap raids are not recommended. And this could be the reason for the strange CPU peaks I notice during heavy disk activity on the raid. That's why I can't use it for streaming samples.

    In my DAW Samplitude I can "only" play back about 60 tracks of stereo 16/44.1. That sums up to something like 60 * 176.4 KB/s = 10,584 KB/s, or about 10.3 MB/s. I think that's too little for a raid like that.

    I guess I'm going to kick the raid out and use the disks in plain normal mode...

    Sorry if this is getting off-topic.
    Bests,
    - M
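
The track-count arithmetic above can be checked in a couple of lines (the 176.4 KB/s per stereo track figure and the division by 1024 follow the post's own convention):

```python
# Throughput needed to play back N stereo 16-bit/44.1 kHz tracks.
KB_PER_SEC_PER_TRACK = 176.4

def required_mb_per_s(tracks: int) -> float:
    """Sustained MB/s needed for `tracks` simultaneous stereo 16/44.1 streams."""
    return tracks * KB_PER_SEC_PER_TRACK / 1024

print(round(required_mb_per_s(60), 1))  # -> 10.3, matching the post
```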

  • that's why i hate benchmarks - they don't even state the file size used or whether read- and/or write-caches are active. 64 KB (65536) is the default blocksize for almost all raid controllers, and you see the lowest CPU usage with this value, but 10.22 MB/s is depressing for a raid 0.
    more honest for benchmarking is EZSCSI from adaptec (it also works for ATA drives) - you can adjust file size and random/sustained read/write.
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
  • To complete my story: I have installed a new Hitachi 120GB 7.200rpm hard disk with 8MB cache (so the same as my raid disks) and the test gives:

    G:\>dskbench
    DskBench 2.12
    (c) 1998, SESA, J.M.Catena (cat@sesa.es, www.sesa.es)
    Timer Check = 1002 (should be near 1000)
    CPU Check = 50.15 % (should be near 50.00 %)
    CPU index (relative to Pro 200 MHz) = 6.046698
    Open = 0 ms
    Write = 8379 ms, 30.55 MB/s, CPU = 1.25 %
    Flush = 26 ms
    Rewin = 0 ms
    Read = 8402 ms, 30.47 MB/s, CPU = 1.32 %
    Close = 0 ms
    BlockSize = 131072, MB/s = 11.68, Tracks = 138.87, CPU = 0.68 %
    BlockSize = 65536, MB/s = 11.64, Tracks = 138.42, CPU = 1.30 %
    BlockSize = 32768, MB/s = 7.46, Tracks = 88.73, CPU = 1.60 %
    BlockSize = 16384, MB/s = 5.00, Tracks = 59.49, CPU = 2.19 %
    BlockSize = 8192, MB/s = 3.77, Tracks = 44.85, CPU = 3.03 %
    BlockSize = 4096, MB/s = 3.01, Tracks = 35.78, CPU = 4.73 %



    My track count inside Samplitude comes out a few tracks lower than with the raid, about 54 stereo tracks at 16/44.1.

    So although I can't recommend a raid, especially for streaming, it's at least not slower than a single hard disk... [8-)]
    Actually there's hardly any benefit, but the great disadvantage of a doubled hard disk crash risk.

  • so what do we learn after comparing these numbers? 64KB _is_ the default block size, cheap raid controllers don't help with streaming, ibm (=hitachi) disks are mediocre, certain benchmarks state nothing about read/write caches, 8 MB caches are more of a hindrance than a help, etc.

    please consider that each application which depends on streaming only pulls 90 - 180 KB of _raw_ data from a wave- (or gig-) file during a single access - this number depends on the settings and capabilities - so any part of the chain in the flow of data that does not let reads get past the various caches will ultimately decrease performance

    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
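
The 90 - 180 KB per-access figure, together with the ~9 ms seek times mentioned elsewhere in the thread, makes the bottleneck easy to estimate. A rough sketch (128 KB is an assumed midpoint access size; real controllers overlap seeks and transfers, so this only illustrates the order of magnitude):

```python
# How many random disk accesses a set of streaming voices generates.
KB_PER_SEC_PER_VOICE = 176.4  # stereo 16-bit / 44.1 kHz stream rate
AVG_SEEK_MS = 9.0             # typical 7200rpm seek time quoted in the thread

def accesses_per_second(voices: int, access_kb: float) -> float:
    """Random reads per second generated by `voices` concurrent streams."""
    return voices * KB_PER_SEC_PER_VOICE / access_kb

# 64 voices reading 128 KB at a time generate ~88 random reads/s;
# at ~9 ms per seek that is ~0.8 s of pure seeking every second.
n = accesses_per_second(64, 128)
print(round(n))
print(round(n * AVG_SEEK_MS / 1000, 2))
```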
  • @Another User said:

    ibm (=hitachi) disks are mediocre,

    Oh shit, that's the conclusion? I chose them because of their low seek time according to their data sheets...

  • @Crystal said:

    1) Don't use Giga on your sequencer PC if possible.


    It is true that you want multiple PCs for VSL, but on the other hand, I used to run GS on the same machine as my sequencers with good results. I did run into performance problems in heavily loaded situations, but you can still get a lot of work out of that sequencing PC, since sequencing is only a moderate load. The extra performance is there if you want to use it.

  • mathis, that fits pretty well with the numbers you posted. 256 MB takes 8.3 s, that's ~30 MB/s (helped by the cache on the disk) - a sustained datarate of 11.6 MB/s would not make me very happy, and calculating from your voicecount i read an effective throughput of 9 MB/s.
    given an average seektime of 8.5 ms, this sounds *normal* ...

    the block size (for data access) is not the same as the sector size (used when formatting a disk). i'd recommend formatting harddisks used for audio with 4096 B/sector - this gives you less overhead than the default 512 but still allows using the built-in defragmentation of W2K/XP (which does not work with higher sector sizes)

    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
  • @evanevans said:

    How inflexible. And all that hardware. geesh. Too bad you aren't into Apple computers.
    Evan Evans

    Hey Evan,

    Are you running the full Pro Edition successfully on your Mac?

    Thanks.