Vienna Symphonic Library Forum

  • Quad Core Intels are here!

    Hey everybody, check it out: there's an interesting article there about the quad-core CPUs and Mac Pro computers (and of course this is interesting for PC users also). COOL!

    Miklos. [:D]

  • You know, I was wondering why no one brought up that Apple should be able to use AMD chips too if they want. At least I assume that if Windows can run on both, OS X can.

    The next question is whether all those cores will be accessible to plug-ins and instruments. Of course, the idea of a machine that's twice as powerful as anything we have today, with the ability to access as much RAM as it can eat, is pretty appealing. But will it be?

  • According to most tests, unless your apps are programmed to use the full power of 8 cores, the performance gain is minimal and certainly does not justify the price.

    I know Logic had to be updated to use 4 cores, and hopefully it will use 8 cores as well. But what about the other software? And what about launching multiple instances of standalone VIs; is having 8 cores available going to help us at all?


  • My limited knowledge: the processes controlled by the OS in OS X are distributed over the cores. In Logic, one process goes to one core, so one instance goes to one core and the next could go to another; it's up to Logic to distribute the load. Overall it's not the best system yet, but it does work on dual and quad machines, and I assume an 8-processor machine will be no different. Of course, after the absolute scandal of the Quad owners left out in the cold with Logic for about nine months while Cubase was updated from the beginning, I don't think anyone should trustingly assume that Apple will update Logic for the 8-CPU machines on time until they actually do it.

    Even so, this is really great news. I think we can fairly safely assume that by the time this all comes together there is a good chance of a 64-bit OS, 64-bit VIs and 16 GB, 8-CPU, 3 or 4 GHz Macs all working together with Logic and Cubase inside 12 months. That means big production and mix capabilities on one machine in real time. I don't think it's unrealistic to consider that in 24 months that could quite possibly double, along with machines with the bandwidth and I/O to cope; and within 36 months there's a good chance of RAM sticks doubling and quadrupling, so it's not unrealistic to see 128 GB of RAM in one machine. Couple that with the huge CPU power and efficient software and you could do the full orchestra on one machine, with plenty of room to spare, running lots of mix plug-ins as well, all in real time with no freezing. COOL!
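The per-core load distribution described above can be sketched as a toy model: a host hands each instrument instance (one process) to the least-loaded core. This is a made-up illustration, not how Logic actually schedules, and the per-instance loads are invented numbers.

```python
def distribute(loads, n_cores):
    """Greedy 'least-loaded core' placement: each instrument instance
    (one process) is handed to whichever core is currently least busy."""
    cores = [0.0] * n_cores          # accumulated load per core
    placement = []                   # which core each instance landed on
    for load in loads:
        target = cores.index(min(cores))
        cores[target] += load
        placement.append(target)
    return placement, cores

# Hypothetical per-instance CPU loads (fractions of one core)
loads = [0.6, 0.5, 0.4, 0.3, 0.3, 0.2]
placement, per_core = distribute(loads, 4)
print(placement)   # each of the six instances mapped to one of four cores
```

Even this toy shows why the scheme "does work" but isn't optimal: one process can never span two cores, so a single heavy instance still caps out at 100% of one core no matter how many cores are free.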

  • Unless there is significant improvement in disc transfer times, 128 GB of RAM would take an awfully long time to populate. Imagine switching between two music cues taking 20-30 minutes. Hard discs seem to max out at 60-70 MB/sec for large file transfers (considerably lower when thousands of header files are loaded), and whilst RAIDing offers significant improvements, this doesn't translate to VI load speeds. At current real-life load speeds, filling 128 GB would take over an hour!

    So unless the software and hardware interaction shows considerable advances in the next few months, the limitation will become load speeds rather than RAM or CPU.
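The arithmetic behind that warning is easy to check. A back-of-envelope sketch, using the 60-70 MB/sec large-file figure from the post and an assumed 25 MB/sec for real-life VI loading (an illustrative number, not a measurement):

```python
def load_time_minutes(ram_gb, mb_per_sec):
    # minutes to stream ram_gb gigabytes at a sustained mb_per_sec rate
    return ram_gb * 1024 / mb_per_sec / 60

streaming = load_time_minutes(128, 65)   # ideal large-file transfer
real_life = load_time_minutes(128, 25)   # assumed real-world VI rate
print(round(streaming), round(real_life))  # roughly 34 and 87 minutes
```

So even at the drive's best streaming rate, filling 128 GB takes over half an hour, and at realistic VI load rates it's well over an hour, matching the estimate above.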


  • It's fairly easy to predict that computers are going to improve in the coming years [:D]

    10 years ago, a high school friend told me that he heard about a video card with 1 GB of internal memory! Of course, at the time, he was just completely bullshitting us. The fact is that, today, there are now video cards with 512MB of memory. So he will surely be right sooner or later [:)]

    But as much as it is fun to guess what the world of computers will look like in 2, 5 and 10 years, it's absolutely useless in terms of what is the best investment *today*...


  • A major issue with any multi-processor system is bus contention. Adding cores can only really be beneficial if the memory or I/O buses have unused communications capacity when the existing complement of processors is making full use of them. Otherwise, it's like trying to increase the traffic-carrying capacity of the road network by adding vehicles; once the roads are full, more cars don't equal more throughput.

    Given the description of the quad-core chips in the referred-to article, that they are pin-compatible replacements for dual-core chips, then it doesn't surprise me that the performance boost is substantially less than 100%, since it's unlikely that the existing buses could carry twice as much data as the dual-core chips need.

    Intel found that going above about three processors on a single bus gave not just diminishing returns, but could be counter-productive because of the overhead of bus arbitration, due to the number of times each CPU is denied access to memory or I/O because another CPU is using them. This is why their Xeon architecture pegs out at four, as I understand it. (AMD get more performance by taking the multiple-bus architecture from DEC's Alpha chip, which is more scalable because it has parallel channels to memory. But that's more expensive to make, since every motherboard carries the cost of all those duplicate connections.)

    One potential benefit of adding cores, though, is better cache utilisation. Each CPU has some cache associated with it, either solely for it or shared with other cores. If you have more tasks active on the machine than there are CPUs available, then those tasks will periodically be stopped, taken off the CPU they were running on, and another program given access to the CPU instead. When the first task is given the chance to run again, perhaps only a millisecond or two later, the cache of data it had built up could have been scribbled on by the other task. This can completely annihilate the performance of both tasks compared with running them isolated from one another.

    A separate problem is when the first task gets allocated to the first available CPU, and it turns out to be a different one from the CPU it ran on a moment ago; in this case, even if its cached data were still available, it's not accessible from the new CPU. 'Processor affinity' is a technique used to try to get around this, where long-running tasks are automatically assigned to the same CPU when possible. I don't know how well OS X handles this, or whether tasks like individual instruments know how to ask for it, but that would certainly be a factor in getting the best out of any multi-CPU machine.
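On that last point: processor affinity is something a program can request explicitly on some systems. A minimal sketch in Python, with the caveat that `os.sched_setaffinity` is a Linux-only API; OS X does not expose hard affinity to user code this way, so this illustrates the concept rather than a Mac recipe.

```python
import os

# Pin this process to CPU 0 so the data it has cached there stays warm,
# instead of being rebuilt on whichever CPU happens to be free next.
if hasattr(os, "sched_setaffinity"):     # Linux only
    os.sched_setaffinity(0, {0})         # pid 0 = the calling process
    print(os.sched_getaffinity(0))       # -> {0}
```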

  • Then why not just run them in parallel like Digidesign does with TDM? You'd probably have to bounce in real time, but other than that...

    Note that I have no idea what I'm talking about, if you haven't noticed - I'm just musing out loud.

    @Another User said:

    I think we can fairly safely assume that by the time this all comes together there is a good chance of a 64-bit OS, 64-bit VIs and 16 GB, 8-CPU, 3 or 4 GHz Macs all working together with Logic and Cubase inside 12 months.

    It's interesting that there are two opposing lines of hype going around, which might be telling. One was the graphic behind Steve Jobs at the Apple Developer Conference with 64 bits shouting all over the place, when they announced... Leopard? (This cat thing is getting a little naff by now; I can't even keep track of which ones Tiger, Panther, and Jaguar are at this point.)

    The other line of hype you hear a lot is "what does the mainstream user need 64 bits for?" That makes me wonder, because I read it a lot and it smells like "positioning." [:)]

    Cakewalk and Steinberg were hawking 64 a couple of years ago (but they aren't really doing that now), and some audio hardware companies have 64-bit drivers, but in general I haven't heard software companies tout this as much as you'd expect if it's really close on the horizon.

    Meanwhile the price of the RAM these machines are using is a major stumbling block.

  • For the 2x dual-CPU machines, Apple already has a double-bandwidth bus architecture; in short, sufficient bus for each CPU core. Naturally, as I said, one would expect them to expand the bus architecture to accommodate the extra cores.

    Re hard drives: if the drives are slow loading, just buy more drives! If you have 10 x 20 gig drives on a PCI card, 128 gigs is going to load REAL fast, don't worry. Just distribute the instruments over the drives.

    Re multi-CPU efficiency: since Apple and Windows are now on an even processor playing field, I think the competition will now be viewed from a perspective of OS efficiency, and this will come down to core tasks such as management of multiple cores, at least to a significant degree. In other words, I think we can say that those types of issues will *at least* be more or less properly looked at in both camps.

    Of course it goes without saying that it is the overall design of the machines, including the OS, that produces the results. I think proper investment will be made there, largely because competition is a big part of the game and they have to compete on efficiency on both platforms with more or less similar hardware (at least the CPUs).

    I don't know about PC hardware, but Apple has already taken care of the bus issues on present machines; they have AMPLE bus architecture for the multiple-core CPUs, RAM access and hard drive access. I assume future machines with more cores will automatically take this same design principle, which is inescapable when dealing with multiple cores, properly into account, since they are already doing so now and obviously realise the benefit of it. Of course there may be limitations in machine design, but since it's obvious that multi-core is the way the industry is going, it's probably fair to say that motherboard manufacturers and the like are already well aware of, and planning for, the likely future of 8 cores and beyond.

    Just my thoughts on it anyway.
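Whether ten drives really load 128 gigs "REAL fast" depends on how well the bandwidth scales, and the earlier post notes that multi-drive loads don't scale at anything like 100%. A quick sketch of the arithmetic, with an assumed (not measured) 70% scaling factor per extra drive:

```python
def parallel_load_minutes(total_gb, n_drives, mb_per_sec_each, scaling=0.7):
    # Each extra drive contributes only `scaling` of its bandwidth,
    # modelling the imperfect scaling of real multi-drive loads.
    effective = mb_per_sec_each * (1 + scaling * (n_drives - 1))
    return total_gb * 1024 / effective / 60

one_drive  = parallel_load_minutes(128, 1, 60)
ten_drives = parallel_load_minutes(128, 10, 60)
print(round(one_drive), round(ten_drives))  # roughly 36 and 5 minutes
```

So more drives do help substantially even with imperfect scaling, though "REAL fast" is still minutes, not seconds, at these assumed rates.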


  • There will always be a bottleneck somewhere: either bus speed, processor speed, available RAM or disk throughput. And as soon as computers get fast enough to run 800 voices with 50 Vienna Instruments on a single computer, I'm sure VI2 will be announced. [;)]

    But this makes me want to upgrade my system, at least to a Core2Duo with an upgrade path to a quad.

  • @Nick Batzdorf said:

    Cakewalk and Steinberg were hawking 64 a couple of years ago (but they aren't really doing that now), and some audio hardware companies have 64-bit drivers, but in general I haven't heard software companies tout this as much as you'd expect if it's really close on the horizon.

    Same here.


  • @mpower88 said:

    Re hard drives: if the drives are slow loading, just buy more drives! If you have 10 x 20 gig drives on a PCI card, 128 gigs is going to load REAL fast, don't worry. Just distribute the instruments over the drives.


    Hi Miklos,

    The VIs appear to load at between 15% and 25% of the rated transfer rate of the hard drive. Also, when a number of drives are utilised, these figures do not scale at anything like 100%. I can currently load a single 2 GB file into RAM (from a four-disc SATA RAID) in a few seconds, but loading 2 GB of VI material takes a number of minutes.

    The net result is that unless there is a significant (20x) increase in the speed of VI-type file transfers, switching between songs will become impossibly slow with RAM amounts approaching 100 GB.

    However, what would speed the whole process up immensely (even now) is if the VIs (when used within a host) could scan current RAM to see what was already loaded, like the "keep common samples in RAM" function in Logic.

    .... and what would be even better is if a similar function could occur even when the song you were opening was replacing one that was being closed.
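The "keep common samples in RAM" idea amounts to taking a set difference between what is already resident and what the incoming song needs. A minimal sketch (the sample names and function are invented for illustration):

```python
def plan_switch(resident, next_song):
    """Work out the minimum disk traffic when switching songs."""
    to_load = next_song - resident    # must be read from disk
    to_free = resident - next_song    # can be released (or kept as cache)
    keep    = resident & next_song    # already in RAM: no disk access at all
    return to_load, to_free, keep

resident  = {"Violins_leg", "Celli_stacc", "Horns_sus"}
next_song = {"Violins_leg", "Celli_stacc", "Trumpets_sfz"}
to_load, to_free, keep = plan_switch(resident, next_song)
print(to_load)   # only the one new patch would hit the disk
```

For templates that share most of their instruments between cues, nearly everything lands in `keep`, which is why this kind of diffing would make song switching near-instant.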


  • Here's a thought (and pardon if it's not reasonable):
    In Digital Performer you load VIs into a "V-Rack", which allows you to switch between sequences in the program with no waiting. Of course, when you close or open another file with, say, another film cue and QuickTime file etc., everything reloads and you sit and wait.

    What if the program itself allowed a V-Rack-type host to remain open, or at least allowed samples to remain in RAM (as mentioned above), until you actually quit that part of the program?

  • This may be of no help, but I just thought that I'd share my brilliance with you in case there is an application for it in your case....!

    I was finding that when changing projects it was taking 15 minutes to load samples, and as soon as I wanted to switch projects they were all dumped and I had to load everything again. I am using FX-Teleport as a host, and this is where my genius comes in. I load my whole template for each PC into Forte on my DAW, but using the LAN version of VI. In other words, I do exactly the same thing as loading in Nuendo, but instead I'm doing it in a VST host.

    Now the clever bit. When I open my project, I load exactly the same samples, but this time using Nuendo as the host. Of course VI knows that these samples are already loaded into FXT, so it doesn't load them again (actually it takes up about an extra 79 MB). Now my template is up and running immediately; I can change projects, close Nuendo, and all the samples are still loaded when I open the project again. This means I can change projects instantly and have all my samples available without loading. I just have to make sure that either Nuendo or Forte (or both) is open at all times.


  • Okay, so basically you load all your samples outside of the DAW in a VST/AU host so they remain in RAM, but you don't actually use the external host. Of course that begs the question of why not just use the host program. For me the answer is that I have never been able to run Plogue outside without performance issues.

    Do I understand your idea correctly?

  • Loading in the DAW is much more convenient when it comes to mixdowns, using plug-ins etc. I can also have almost unlimited VST outputs, whereas using the host I'm limited to what my hardware has.

    Regarding performance issues in the host: as I don't have any MIDI input or audio outputs enabled, there is no chance of a problem.


  • Right, of course.

  • Julian, sounds like CPU will fix that, eh?

  • Nick: why would OS X be 64-bit in Leopard, with 64-bit Apple apps coming soon after? Why would they bother if it wasn't the direction of the future? The fact is that 64-bit is necessary in the coming years because of hard drive/storage and memory access. It's like people used to say, why would you record in 24-bit when you're mastering to 16-bit? What a ridiculous argument, yet people made it earnestly everywhere I went, but I always said you can't compare 24-bit to 16-bit even on a master.


  • @mpower88 said:

    Julian, sounds like CPU will fix that, eh?

    I think that will help. When, for example, you are doing a Retrospect back-up and large files are being archived, the data transfer rates are as much as a hundred times quicker than when the system, library and cache folders are being backed up and tens of thousands of files are being copied.

    So the consideration with VIs is not only the RAM amount but the number of files (headers); currently the sheer number of files required, which is a direct by-product of the quality/flexibility of the library, appears to be the limiting factor in load times.

    So although many users (me included) want to load more VI instances and larger matrices, once we head north of 10 GB of RAM I suspect the limitations will start to come from drive/CPU access and load speed. I'm not an expert, but it's possible CPU increases will help, along with, I suspect, further VSL software evolution.
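The large-file-versus-many-files gap described above can be modelled as a fixed per-file overhead (open, seek, metadata) paid on top of the raw transfer speed. The overhead and throughput numbers below are illustrative guesses, not measurements:

```python
def transfer_seconds(total_mb, n_files, mb_per_sec=60, per_file_ms=8):
    # raw streaming time plus a fixed cost paid once per file opened
    return total_mb / mb_per_sec + n_files * per_file_ms / 1000

one_big    = transfer_seconds(2048, 1)       # a single 2 GB file
many_small = transfer_seconds(2048, 40000)   # same 2 GB across 40,000 files
print(round(one_big), round(many_small))     # roughly 34 vs 354 seconds
```

With enough files, the per-file cost swamps the transfer itself, which is why faster drives alone won't fix VI load times; fewer or consolidated files (or smarter software, as suggested above) would.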