Vienna Symphonic Library Forum

  • "...the CPU wasn't gagging."

    By comparison, my CPU meter shows 35% use on the left (first core) and about 20% on the right (second core) after I hit play with nothing playing. And that's with only 1 GB in the VSL server and half a GB in Logic, and a remaining 3.19 GB reported as "free" in Activity Monitor.

    When I loaded RAM closer to Nick's numbers, I was seeing a 50 to 60% hit in *both* CPUs as soon as the transport was running, without any music. It was a surprising loss of CPU headroom just because the RAM was loaded, irrespective of playback limits, bus speed, disk speed, etc. I was paying a CPU penalty just for getting more sounds online. Within a handful of cross-faded layered tracks, or after maybe ten velocity-switched tracks, Core Audio overloads would stop playback.

    I think in 12 to 18 months, this will all be moot.

  • Yes, I'm looking forward to hardware and software improvements myself. An Intel tower or two are on my must-haves for 2007.

    I'm also eager to see how Leopard impacts Cube updates...

  • @JWL said:

    ...if the CPU won't increase in speed substantially, then Apple opts to put more of them in one machine. Very odd in a way.

    But other things will have to improve to get the best out of VIs-- hard drive speeds and transfer rates as a standard would be my first desire for improvements. Bus bandwidth would also be among the most important improvements.

    While I'll never expect that one computer will run much more than a handful of instances of VIs needed for a full orchestral realization, it remains a little strange that more can't be done on one computer than is currently possible.


    The "multiplying" of CPUs seems pretty much unavoidable to me -- unless they come up with a fundamentally different paradigm for processor design. There are physical limits to how far the current approach can go (power consumption vs heat dissipation), which we first saw when the early 90-nanometer chips hit the shelves. It was a bit of a fiasco, really...

    But the RAM thing is still a strange and interesting problem to me. I think you're absolutely right about hard drive and bus speeds being the primary focus for the work we do. Some of the advances in flash-based drives, like the ones Samsung released back in the summer (I haven't looked into this lately... any improvements?), are quite promising. IMO, we're kind of looking for the wrong solution by wanting all our samples to be loaded into RAM. The speed of today's RAM is necessary for *calculations*, but is basically overkill for playing back samples. Yes, when you've got loads of samples to play at the same time it's a somewhat different story, but that could also be dealt with by a flash-based HD on an appropriately designed bus (I don't know the numbers, but PCIe is probably already capable).

    But if you keep in mind that all the samplers we've been using for the last few years have been "streaming" samplers anyway, then it's quite clear that it's only the first few milliseconds of playback where there's a major issue. With a device that could deliver in those first milliseconds (i.e., the next-to-zero seek time of a flash-based HD) we should be able to stream to our heart's content. Anyway, I don't think loading more and more into RAM is the only, or the best, way of improving sample-based work.
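
    Just to make the streaming point concrete, here's a toy sketch (Python, with made-up file and callback names; this is only my own illustration, not how the VI engine actually works): only the first ~50 ms of each sample lives in RAM, and everything after that is pulled from disk once the note has already started sounding.

```python
import threading
import wave

PRELOAD_MS = 50  # assumed head size: enough to cover the drive's worst-case seek

class StreamingVoice:
    """Toy streaming voice: RAM holds only the sample head, disk supplies the rest."""

    def __init__(self, path):
        self.path = path
        with wave.open(path, "rb") as w:
            self.rate = w.getframerate()
            self.head_frames = int(self.rate * PRELOAD_MS / 1000)
            self.head = w.readframes(self.head_frames)  # the only part kept in RAM

    def play(self, emit):
        # Phase 1: the note starts instantly from the RAM head (no disk latency yet).
        emit(self.head)
        # Phase 2: the rest of the sample is streamed from disk in the background.
        threading.Thread(target=self._stream_tail, args=(emit,), daemon=True).start()

    def _stream_tail(self, emit, chunk_frames=4096):
        with wave.open(self.path, "rb") as w:
            w.setpos(self.head_frames)
            while True:
                chunk = w.readframes(chunk_frames)
                if not chunk:
                    break
                emit(chunk)
```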

    The other solution, which I mention whenever I get the chance, is to merge the sequencer and the sampler at the lowest level possible. I keep pestering everybody about a VSL-designed and -created notation-based sequencer because I genuinely think that would be the best solution. The fact is that, except for those times when we're literally *playing in* a new part, we absolutely *do not* need all those samples to be loaded in RAM! For the most part, the incredible speed that RAM is capable of is wasted on storing samples for music that is played back in precisely the same way every time we hit the "play" button.

    If VSL authored a sequencer, even if it wasn't notation-based, they could address the preloading of samples based on the information contained in the sequencer track. In the simplest possible model, they could create a system whereby, once a passage is recorded, a sample list is created, with timestamps for sample preloading. The preload timestamps would be placed an appropriate period before each note, say 50 milliseconds (overkill, probably), and the sample head would be buffered in that window. I built a system like this in MaxMSP and Java which actually worked quite well (though, not being a sequencer itself, it needed to analyze MIDI files to build its sample list), and those are definitely not the best-suited languages for this kind of work! (In a more sophisticated version, the timestamp could be calculated dynamically, in order to distribute the workload more efficiently in busy passages.)

    But of course, if you don't know what note is coming, then you can't buffer the sample to play it. So unless VST is given a view of the entire sequence (does VST 3 support this?), we won't see this *extremely* simple solution implemented, since plugins remain completely blind to the future.
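
    In case the "sample list with preload timestamps" idea sounds abstract, a minimal sketch (again just toy Python; the note list, the 50 ms lead and the request_preload callback are all made up):

```python
PRELOAD_LEAD = 0.050  # seconds of warning before each note-on (overkill, probably)

def build_preload_schedule(notes):
    """notes: (start_time_sec, sample_id) pairs taken from the recorded sequence.
    Returns (preload_time, sample_id) pairs, sorted, ready to drive the buffering."""
    return sorted((max(0.0, start - PRELOAD_LEAD), sample_id)
                  for start, sample_id in notes)

def service_preloads(schedule, now, request_preload):
    """Fire every preload whose timestamp has been reached; return what's left.
    `request_preload` would hand the sample head to the streaming engine."""
    pending = []
    for t, sample_id in schedule:
        if t <= now:
            request_preload(sample_id)
        else:
            pending.append((t, sample_id))
    return pending

# Example: a note at 1.000 s gets its preload issued once the clock passes 0.950 s.
schedule = build_preload_schedule([(1.0, "violin_sus_C4"), (1.5, "violin_sus_D4")])
schedule = service_preloads(schedule, now=0.96,
                            request_preload=lambda s: print("preload", s))
```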

    Anyway, the point is that unless we plan on having 50 keyboard players on stage, all playing a single VI each, *live*, then there's absolutely no need to have all those samples playing from RAM on each and every pass.

    J.

  • ...of course, I realize that the "solution" I proposed above is basically the same as "freezing" tracks, and that the real issue would then be reloading the samples for live playback/recording. But that's where a flash-based drive would come in, making the buffering of samples for live playback *much* faster.

    Obviously, I don't have a final solution. I'm just dreaming and hoping, like everyone else...

    J.

  • i'm considering hybrid drives (legacy rotating magnetic disks combined with flash) a dead end even before they're released. flash memory has a limited number of write cycles (80,000 - 150,000) and becomes unreliable after a certain number of cycles. very nifty algorithms have to be used to spread the write operations evenly across all available memory cells.
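
    just to illustrate the "spread the writes" part, a toy wear-leveling allocator (purely illustrative - real flash controllers are far more sophisticated than this):

```python
import heapq

class ToyWearLeveler:
    """always write to the least-erased free block, so no cell wears out early."""

    def __init__(self, num_blocks, max_cycles=100_000):
        self.max_cycles = max_cycles
        # (erase_count, block_id) pairs in a min-heap: least-worn block comes first
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)

    def write(self, data):
        erase_count, block = heapq.heappop(self.free)
        if erase_count >= self.max_cycles:
            raise IOError(f"block {block} worn out after {erase_count} cycles")
        # ... program `data` into `block` here ...
        heapq.heappush(self.free, (erase_count + 1, block))
        return block
```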

    more interesting would be millipede storage (originally announced for last summer) http://en.wikipedia.org/wiki/IBM_Millipede offering data throughput in the gigabyte range - an ideal medium for read-only data like sample libraries.

    we will have to wait and see what millipede's latency values are like in real life, because the latency of flash drives is great for small buffer sizes, whereas their storage capacity is still too limited (currently 8 GB)
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
  • Yeah, the millipede looks great! I think you pointed me to that article once before... (It sounds familiar anyway.) I didn't mean to suggest that flash-based drives were necessarily the best way to go, just that something other than rotating magnetic discs would probably point the way forward. But it's nice to have some details! Thanks.

    J.

  • Plowman, if I remember right I had a sequence playing when I did one of those tests. (This was a few months ago now.)

    I wonder why your machine isn't behaving the same way. And to tell you the truth, I'm not as excited about 64-bit access as I once was, because I'm able to access all the affordable RAM I need to access right now. Maybe the Intel Mac RAM prices will come down, but somehow the idea of spending $250/GB (vs. the $75/GB I paid for the 8GB in this machine) to access still more RAM on a single machine is uninspiring.

    A 2x2.5 G5 with 8GB and a SATA card for more storage may be 60 computer years old (1 year = 20 computer years), but it's still a very, very serious DAW. I bought this machine in April 2005; historically machines have been feeling very old by the time they get 1-3/4 years old, but not this time. I've been upgrading machines every couple of years since the mid-'80s, so this is quite a change. Part of the difference is that we're not using just one machine anymore, of course. And then Macworld is coming next week, and NAMM the week after, so it's also possible that this will sound silly. However, the point remains.

    In any case, I like running stand-alone instruments outside the DAW on the same machine very much. To me it's not a disadvantage compared to having everything inside the DAW.

  • I have a dual 2.0 with 4.5 GB RAM and I am running out of CPU before running out of RAM, so I optimize, freeze, and then I run out of CPU. The weird thing is that the VSL server is using more clock cycles than Logic even when the tracks are ALL frozen. I have to unload the VSL instance from within Logic to get the CPU usage to go down in Activity Monitor. Is that normal?

    For me the main issue is, yes, you can do a lot with this machine, but performance is a big part of it. To be able to play in the parts with decently low latency and hear what you are doing, you have to have it set to a 256-sample buffer with a small process buffer. That works nicely, even though it could be better - good enough to play the parts. Of course, running the machine like that means you have significantly less CPU headroom. When I'm finished playing in parts and it's time to mix, I reload the project with a 1024-sample buffer, so that is fine, and then you can go and load reverbs etc. (whereas I just use a "cheap" reverb sound for composing, to save on CPU and get more sequencing done).
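
    For reference, the rough numbers behind the 256-vs-1024 trade-off (assuming a 44.1 kHz project; this is just the buffer maths, and the real round-trip latency is a bit higher once converters and input buffers are added):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44_100):
    """One buffer's worth of latency, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

print(buffer_latency_ms(256))   # ~5.8 ms  - low enough to play parts in
print(buffer_latency_ms(1024))  # ~23.2 ms - fine for mixing, sluggish for playing
```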

    Even though my machine isn't fast enough to use it - what is this I'm hearing about running other instances of VI standalone? Does this really work nicely?

    Miklos.

  • @mpower88 said:

    The weird thing is that the VSL server is using more clock cycles than Logic even when the tracks are ALL frozen. I have to unload the VSL instance from within Logic to get the CPU usage to go down in Activity Monitor. Is that normal?


    I'd be interested to know the answer to this question as well. In my experience, it seems as though the VIs run at full CPU whenever DSP is running -- so in some apps that means a high-CPU idle, while in others it means high CPU any time the transport is running. They do seem to be quite efficient, but I'm curious to know whether they mute themselves when they are not processing audio (or whether they could be made to do so?).
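
    To show what I mean by "mute themselves", here's a toy render loop (pure speculation on my part - I have no idea whether the VI engine is structured anything like this): when no voices are sounding, skip the per-voice work entirely and hand back silence.

```python
class ToyInstrument:
    """Toy render callback that idles itself when nothing is sounding."""

    def __init__(self):
        self.active_voices = []  # would be filled by incoming note-on events

    def process(self, num_frames):
        if not self.active_voices:
            # Idle: skip all per-voice work and return silence at near-zero cost.
            return [0.0] * num_frames
        block = [0.0] * num_frames
        for voice in self.active_voices:
            for i in range(num_frames):
                block[i] += voice.render_frame()  # the expensive part
        self.active_voices = [v for v in self.active_voices if not v.finished()]
        return block
```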

    J.

  • Okay, I guess I need to install the V.I. player on my dual 2.0 and see whether I have the same problem. You can see the Activity Monitor dumps above to prove that my 2 x 2.5 is perfectly happy working this way.

    Were there any architectural changes in the machines? As far as I know the only difference in the 2 x 2.5 is that the CPUs run a little faster and it has a liquid cooling system.

    Weird.

  • Hi Nick,

    I, too, would appreciate it if you would try that test with a dual 2 Gig machine. That's the machine I have, and I really, really want to sort out which way to go with it.

    I've spent several days now going over combinations of options. Even though Macworld & NAMM are happening right away, a clear assessment of my dual 2 Gig machine would certainly help with any decision to be made.

    I want to thank you in advance for taking the time and effort to check this out. Also, where do you think would be the best place to look for older dual & quad G5's?

    Best regards,
    Jack

  • Jack--

    I would start with Apple's own refurbs. You'd have the confidence that they'd been checked out and the benefit of having some sort of Apple Care.

    There are other options, but I'd at least start with Apple, fwiw.

    For example:

    Refurbished Power Mac G5 Quad 2.5GHz
    Two dual-core 2.5GHz PowerPC G5 processors
    1.25GHz frontside bus per processor
    1MB L2 cache per core
    512MB of 533MHz DDR2 SDRAM (PC2-4200)
    250GB Serial ATA hard drive
    16x SuperDrive (double-layer)
    NVIDIA GeForce 6600 with 256MB GDDR SDRAM
    Save 19% off the original price
    Original price: $3,299.00
    Your price: $2,699.00
    Estimated Ship: Within 24 hours
    Free Shipping

    AppleCare Protection Plan for Mac Pro/Power Mac (w/ or w/o Display)
    Extends the complimentary coverage on your Power Mac to three years of world-class support and service.
    Price: $249.00
    Estimated Ship: Within 24 hours
    Free Shipping


    Compare that to cdw.com, formerly macwarehouse.com. They were selling PPC Quads at full price while they had them. I just checked that site and they don't even show them anymore.

  • "...I'm not as excited about 64-bit access as I once was, because I'm able to access all the affordable RAM I need to access right now."

    I noted your coolness to 64-bit as early as last year's NAMM, I believe. I think the "Batzdorf Method" shows a healthy non-emphasis on the dream of one computer doing it all. Over the last year, I've stopped holding that torch. Yeah, it'll come one day -- but I've stopped actively waiting for it.

    "...if I remember right I had a sequence playing when I did one of those tests." Again, that amount of RAM you've loaded is clear and inspiring. But I'd like to know what kind of CPU price you're payng just for the loaded RAM when the transport is running without anything playing back. I've seen your System Memory posts, but not CPU readings. But only at your convenience.

    "I've been upgrading machines every couple of years since the mid-'80s, so this is quite a change." Agreed. About '04, we crested a VI threshold. The discussions I read now have more to do with ease of use.

    "Is that normal?" I don't know if it's normal, but it's what I face all the time.

    "...whereas I just use a "cheap" reverb sound for composing to save on cpu and get more sequencing done." In my search for more CPU, I was alarmed to realize how *little* Space Designer was asking of my computer. I had wrongly assumed that it was a major culprit. Finally in a diagnostic session, I removed them from my song, and the CPU savings was negligible. It led me to wonder if the caution against such things was a bit out-dated, like the way we use to worry about MIDI choke. But your results may vary.

    "I would start with Apple's own refurbs." Yes, me too. I bought my Mac used from Sweetwater and still got some service and return protection. But the sad truth of the Mac universe is, there aren't a whole lot of deals out there. As Nick once pointed out, even after new lines debut, "old" Macs might drop about 150 to 200 dollars. Nice, but sometimes not worth the wait and unlike the jaw-dropping deals we can see in PC's.

  • Thanks JWL & Plowman,

    I'm going to NAMM and plan to mine all the info I can finagle out of all the manufacturers whose VIs & plugins I own or plan to buy in the immediate future.

    I've made up crib sheets of pertinent questions regarding compatibility with Intel Macs, what it will take to migrate their software to another computer (really my biggest concern regarding setup of a new system or network), and what the major library manufacturers need in terms of computer resources. This way I can figure out how to spread the programs over multiple CPUs, with informed consideration of the eventual upgrade to the 64-bit future.

    I'm also bringing a video camera to document what I see & hear so I can remember & recall it accurately.

    It's really great that NAMM follows so closely on the heels of Macworld. There should be a lot of answers available then that right now can only be had by reading tea leaves. I'm with those who believe that the single-machine, 64-bit heaven-world is still a ways off. I use Pro Tools HD & Logic and want to see them split off onto their own platforms. Of course, there are other complications as well.

    Anyway, at that point I'll make my decision about which way to go to relieve my poor little dual 2 Gig from its current misery.

    I'm really looking forward to this trip.

  • Much as I'd like to, Plowman, I don't think I can take credit for this method, and I don't think I was ever cool to the idea of one computer being all we need. I'd like that very much. It's just that I don't see it happening soon - although it's getting better.

    If you look at the Activity Monitor dumps above, you can see the CPU percentage. The second one, with all kinds of stuff going on, appears to be at about 80% - but that's of one of the two CPUs. I'll load those sequences when I get a chance and see what the CPU reading is with the transport stopped.

  • This topic demonstrates what I was fearing and explaining on this same board a few weeks ago. Going "full" 64-bit will *not* automatically give us the ability to use 16+ GB of samples.

    I suspected that there would be a tremendous amount of rewriting before that would happen... and it could turn out that my fear was right!

    Jerome

  • Well, either way, I think it's fair to say that the industry would do well by us consumers as a whole if it provided greater memory access for graphics, video and, obviously, music applications, via whatever means works best. I don't see why there can't be multiple VSL servers controlled by a central administrative server that does not itself access the memory of the other, independent apps; then we could have as many VSL servers running as necessary to utilise the amount of RAM in the machine. Then you could use 16 GB easily, no problem. Of course, I have no actual idea about the technical implementation of this, but it seems to me that you could do it. After all, games do this: multiple servers controlled by a central server. Only in this case, each server is a memory-access device which holds up to 1.2 GB of RAM, not more, and once it is full it hands memory holding off to the next server (or rather, the admin server does that).

    (edit) Just to add: of course the host app (Logic etc.) would interface with the memory via the admin server. The admin server would be like a router between the memory-holding servers and the host. It might add some latency, but on a fast machine that wouldn't matter. On a quad Intel with 16 GB of RAM the problem is RAM access, not CPU, anyway; you could just run your system at a lower buffer setting - seems worth it to be able to access all that RAM.
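
    Purely as a thought experiment (all the names below are made up, and I have no actual idea how the real VSL server is built), the admin-server/node idea could look something like this - each node holds up to about 1.2 GB of samples, and the router fills one node before spilling over to the next:

```python
NODE_CAPACITY_GB = 1.2  # per the idea above: each memory server holds about 1.2 GB

class MemoryNode:
    """One hypothetical memory-holding process."""
    def __init__(self, name):
        self.name = name
        self.used_gb = 0.0
        self.samples = {}

    def has_room(self, size_gb):
        return self.used_gb + size_gb <= NODE_CAPACITY_GB

    def load(self, sample_id, size_gb):
        self.samples[sample_id] = size_gb
        self.used_gb += size_gb

class AdminServer:
    """Routes loads to the first node with room; the host only ever talks to this."""
    def __init__(self, num_nodes):
        self.nodes = [MemoryNode(f"node{i}") for i in range(num_nodes)]
        self.where = {}  # sample_id -> node that holds it

    def load_sample(self, sample_id, size_gb):
        for node in self.nodes:
            if node.has_room(size_gb):
                node.load(sample_id, size_gb)
                self.where[sample_id] = node
                return node.name
        raise MemoryError("all memory nodes are full")

# e.g. roughly 14 GB of samples spread over twelve 1.2 GB nodes
admin = AdminServer(num_nodes=12)
print(admin.load_sample("solo_strings_matrix", 0.9))     # -> node0
print(admin.load_sample("chamber_strings_matrix", 0.9))  # -> node1 (node0 is near full)
```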

    Miklos.

  • Anyway, my personal main problem is not enough CPU... The RAM optimizer and freeze functions combined mean you can optimize a VI quickly enough, and I find it works excellently; it's not too hard to re-open, reset, and re-optimize when necessary either - with this feature you can get quite a lot out of a single instance as it is... The problem then becomes that each instance uses an amount of RAM in the server, so you have to be efficient in using each instance to its fullest in terms of loading sounds... Having multiple VSL servers seems like it would work on the surface, but of course I'm sure there is a complex programming reason why it would not work well. Hats off to the masters at VSL for making VI as superb as it is. I for one am not complaining, only wishing to be able to use it to its maximum capability, or at least have that freedom when writing.

    Miklos.

  • @mpower88 said:

    ...On a quad Intel with 16 GB of RAM the problem is RAM access, not CPU, anyway; you could just run your system at a lower buffer setting - seems worth it to be able to access all that RAM...

    Miklos.


    That sort of reinforces something that has been at the back of my mind for a long time.

    We have clearly reached a technological impasse of sorts, where some aspects of hardware meet or exceed the demands of software while other necessities of code structure prove to be counterproductive. As long as VI users are willing to buy and sustain a network of computers, it may not be a problem.

    But at some point, something has got to give-- the need alone will force a change sooner or later-- always does.

    So maybe users of orchestral VIs represent a minority of computer users, but the term "Pro User", as far as the Mac Pro is concerned, is in a strange way doing as much to alienate users as it does to attract them.

    The Cube is an awesome accomplishment-- its sound exceeds anything believed to be possible on a computer, imho. We're just in a bit of a 3GB limbo for a while yet.

  • To me a network of computers is unfeasible and a pain to set up, administer, and work with. I need that aspect to be as simple as possible when writing.

    Perhaps if we simplify the problem, solutions will follow from that.

    1. We need to access more RAM from VI - let's go with today's maximum: 14 GB (plus 2 GB for the system).

    What can be done to make this happen in the near future? If people are able to run multiple standalone VIs and access more RAM, then *surely* this is not as impossible as it seems; it's only impossible with the existing setup. Can that be modified so that we have multiple VSL servers running? Or one VSL server with memory nodes as separate apps, which it uses to store samples in RAM? That would solve all the problems NOW, with no need to go 64-bit at any stage, and it would work with existing hosts and OS.

    Miklos.