Vienna Symphonic Library Forum
Forum Statistics

194,165 users have contributed to 42,912 threads and 257,926 posts.

In the past 24 hours, we have 1 new thread(s), 13 new post(s) and 85 new user(s).

  • How to run the symphonic cube on a lap-top in real time...

    Now I know that some of my peers here (dare I even call them that?) have 6x Macs and don't really care about running VI on a single machine.
    I, however, am a bit anal and love all my projects being saved as part of the same file and residing on one machine. Plus, with my day-time IT developer head on, I love the challenge of making VI better [:)]
    So I have (yet) another suggestion for how we can achieve this.
    Currently the VI preload is very large, so the limitation for me is less the polyphony and more the amount I can physically load into memory.
    My idea is therefore this:
    Why not have a 'sample quality' slider on the VI? This value could be set to 1, 2, 4, 8, 16, 32, etc. (where 1 = the current situation).
    Depending on this value, VI will (pre) load every nth sample into memory or only play every nth sample when streaming.
    Thus, for 'real-time' composing, I could set this value to, say, 8. Every 8th sample would be loaded, and on playback each loaded sample would be repeated 8 times in place of the 7 that were skipped. The pre-load memory shrinks by a factor of 8, as does the quality: what you gain in quantity, you lose in fidelity. That may be fine for composing. After all, you happily listen to an MP3 recording of an orchestra on your iPod.
    The CPU would also be freed up, because repeating the same sample is cheap and there is less data to pull from disk.
    For export you would always want full quality, so provide a facility to render at full quality on export.
    Of course, this feature would not be needed by some. Others of us could use this to cram the entire cube onto a lap-top if need be, trading quality for quantity whilst we compose on the move.
    What do people think? I can't imagine this being particularly difficult to implement and would be a god-send to those of us unable or unwilling to buy multiple machines.
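    To make the idea concrete, here is a minimal sketch (Python, purely illustrative - the names `preload` and `playback` are invented for this example and have nothing to do with VI's real internals):

```python
# Hypothetical model of the 'sample quality' slider: at quality n,
# only every n-th sample value is kept in RAM, and on playback each
# kept value is repeated n times to restore the original length.

def preload(samples, quality):
    """Keep only every `quality`-th sample value (quality=1 keeps all)."""
    return samples[::quality]

def playback(preloaded, quality):
    """Rebuild a full-length stream by holding each kept value `quality` times."""
    return [value for value in preloaded for _ in range(quality)]

full = list(range(16))         # stand-in for 16 sample values of one note
draft = preload(full, 4)       # [0, 4, 8, 12] - RAM use drops by 4x
restored = playback(draft, 4)  # 16 values again, but only 4 distinct ones
```

    The trade is exactly the slider factor: n times less RAM for an n-times loss in effective sample rate.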

  • paynterr, i'm somehow amazed watching your now 3rd or 4th post with suggestions for how RAM usage could be reduced (every approach is consistent in itself and admittedly interesting), but this is nothing the development of VI would or could follow. such *preview* modes would bring too many compromises and side-effects to give a satisfying result.
    could you actually imagine a legato sample pitched up or down 4 halftones (it would have to be a kind of dynamic pitching from start to destination tone)?

    IMO there is no field like sampling where the saying is so literally true: size matters. unfortunately, that comes with some preconditions.
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
  • All the practices I mention are used in software development elsewhere. VI is no different. I'm merely offering you my advice as an IT consultant for free [:)]
    If you read my post again, you'll see that I am suggesting 'missing out' on "samples", not "notes". My definition of "sample" is a minute fraction of a note, not a recording of a note. A single number, in other words. There are 100,000s of these samples for every note, as you know - 44,100 per second in the case of CD quality. In effect, I am talking about dynamically lowering the quality of the samples in exchange for the quantity of samples that can be loaded into any given environment.
    So missing out half the samples would halve the RAM pre-load and allow twice as many instruments to be loaded, with the downside of reducing the sample rate to 22 kHz. A factor of 4 gives 11 kHz, and so on. But since most people seem to be able to fit the library on 4 good PCs, I imagine 11 kHz would be the lowest you'd have to contend with. Of course, once VSL make the pre-load even lower, you could perhaps live with the sample rate being just halved.
    Yes - that reduction in quality may not be everyone's cup of tea, but it would suit others, would be easy to implement, and would be a great way of composing quickly and getting a 'good enough' idea of how the composition will sound when rendered properly. I challenge people not to still 'enjoy' an 11 kHz recording of an orchestra.

  • last edited

    @paynterr said:

    It concerns me that you don't have the foresight to see the legitimacy of some of these ideas, and it makes me wonder whether you should let someone else develop the software and concentrate on the samples? Sorry to be rude, but that is the conclusion you have to draw. All the practices I mention are used in software development elsewhere. VI is no different.
    If you read my post, I am suggesting 'missing out' samples, not 'notes' as you indicate. There are 1000s of these samples every second - 44,100 in the case of CD quality. In effect, I am talking about lowering the quality of the samples in exchange for quantity. So missing out half the samples would halve the RAM pre-load and reduce the sample rate to 22 kHz; a factor of 4 gives 11 kHz, etc.
    Yes - not everyone's cup of tea, but easy to implement and a great way of quick composing and getting a 'good enough' idea of how the composition will sound when rendered properly.


    But why do that in the first place Paynterr?

    As you have said, there are software developments elsewhere that follow your methodology. And I find it curiously amusing that you would 'strongly suggest' VSL stick to samples and give up the software development, with the implication that recording is their strength and software development is not.
    There have been many here and elsewhere who have praised the advanced developments contained within the VSL VI concept. And the detractors failed last Xmas when they tried to present the VI as an expensive 'white elephant'.

    Originally the VI sat on top of the screen. People noted this was not so conducive to workflow, so the company responded. Problem solved. Quickly.
    Then, copy protection reared its head. And with the latest incarnation from
    S/Soft, no doubt with 'determination' from the team at VSL, the load times have reduced considerably.
    I think you're coming at this from the perspective of comparison, and in reality there is none. VSL is a different product, different concept, different vision to other libraries. EW have their philosophy, likewise Sonic Implants as well.
    Each one has features that users will either like or not.
    But your post implies VSL is somehow 'missing something', and only you can see it.
    I wonder how much time Cube owners have saved with the VI, and how much less work they have to do polishing samples to a pristine quality, because that quality is already built in? Why would you want to reduce that sample quality in the first place? That would defeat the whole purpose of buying THE top end library on the market. RAM issues are about having the right hardware, not software. And Kontakt and Gigastudio owners will no doubt confirm the hefty requirements of those particular programs when it comes to RAM.

    Given the quality of the recordings, the flexibility of usage, the purity and consistency of their quality control with the sound stage, and that unique and delightful VSL 'sound', clean and precise, coupled with a new method of delivery with the VI, I think it's you who's lacked a little vision on this. And let's not forget the upcoming MIR release. (Sorry Dietz, for bringing this up.) And given the inherent usability of the VI, I think the team who've developed this are way ahead of you, and will remain so. They certainly HAVEN'T stood still resting on their laurels, or, as others have done, changed the colours on a GUI and recycled old technology as a 'new breakthrough'.

    Why on earth would VSL want to stick a Ford engine in a Rolls Royce?

    I don't think VSL lack foresight at all. Quite the contrary.

    Alex.

  • Guys - relax... all I am suggesting are ways to improve the product (one that I own myself), reduce load times and improve the number of instruments that can be played in real time on a single machine.
    That is all...
    I am making no comparisons with any other instruments out there.
    Now - I've made my suggestions... the VI developers can now decide whether they are possible or not... and whether users would enjoy the ability to compose with the entire cube on a laptop on a plane?

  • I think Paynterr's ideas are interesting. As someone who is always bumping into the limits of computing power, I for one would enjoy some form of "draft" quality mode.

    Best,
    Jay

  • Thanks JBacel
    Actually 'draft' is a much better way to summarise my technical description above.

  • I appreciate what you're trying to do. Don't get me wrong. I would think every sample developer is hoping like hell that hardware catches up, so they don't have to consider compromises in quality. But I know if I were a developer, the last thing I'd want is a 'lite' version of my premier product, with degraded quality due to a reduced sample rate.
    And with the travelling I do, using a laptop has its advantage in portability and disadvantage in lack of oomph, so I have the same interest in seeing something fit in one box. (I built a big, durable soundfont file some time ago to use within Sibelius, for drafting purposes on the move. It works, and as it's only a draft, I don't expect it to give me a 'finished' sound. Neither do I have to cope with hosts, or reduced playability due to a lack of RAM.)

    Now, if you were interested in taking this further, why not ask for Opus '3', as a VI?
    Then you'd get top-quality 16-bit with a product that WOULD work in one machine, with the usability of the VI.
    It may or may not be possible, given the ongoing VSL project and whatever else they have in mind, but I would think this would be a more practical way to go for those of us who are on the move.

    Alex.

  • last edited
    ok, i was understanding the term *sample* in the sense it is used in the VSL product description (violin staccato has xy samples, etc.).

    if we look at it in a digital manner we have to clarify definitions first:
    - the headers of a PCM/RIFF file use a 32 bit data format (this is the reason why a RIFF file is limited to 4 GB maximum, usually even 2 GB because of the difference between signed and unsigned integers)
    - the sampling bit-depth can be 8/16/24/32 bit (VSL is 16 bit currently; of course it can theoretically be lower than 8 bit, but i'm sure we wouldn't like that. additionally, 8 bit is stored as unsigned integer, whereas 16 bit is signed integer, and 32 bit would be floating point)
    - the sampling rate can be anything (VSL uses 44100 Hz currently - /2 = 22050, /4 = 11025, /8 = 5512.5 ... oops, here we leave the integers and would have to start rounding, because i can't play half a sample)
    - PCM data is stored interleaved in frames, and each frame consists of sequential data for the channels used (2 in the case of VSL, because of stereo)
    i'm basically referring to this description
    - the Nyquist-Shannon theorem says that unambiguous reconstruction is possible if the signal is band-limited and the sampling frequency is greater than twice the signal bandwidth. that's why telephone lines don't sound too good, and i think a sampling frequency of 11025 would leave us with an unacceptable *quality*.
    one could possibly now *invent* (= interpolate) the missing samples to stay at 44100, or repeat the *reference sample*, but i've never tried that and probably wouldn't like to hear the result. do you have any examples to compare the quality by ear?

    based on this - where would you cut the amount of information in half (and further /4, /8, etc.) without losing an acceptable sound?

    another issue is the needed amount of preloaded samples. this is basically only necessary to fill the various buffers (soundcard, audio application, operating system, harddisk I/O) and *hide* the various latencies.
    128 kB (= 32,768 samples at 16 bit stereo) is a common value for most sampling engines - ViennaInstruments needs far less (and is 24 bit!).
    256 bytes (= 64 samples at 16 bit stereo) is a common value for soundcards - many users need to use a higher value because their system cannot deliver such a data stream continuously.

    any efforts in this direction (reducing amount of preloaded data) are useless IMO and stay theoretical unless we get storage media with an almost-zero seektime like compact-flash or solid state disks with enough capacity for a reasonable price.
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
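    cm's buffer figures above can be checked with some quick arithmetic - a sketch only, using the common default values he quotes (one 16 bit stereo frame is 4 bytes):

```python
# Back-of-envelope check of the preload and soundcard buffer figures.
BYTES_PER_FRAME = 2 * 2        # 16 bit (2 bytes) x 2 channels (stereo)
RATE = 44100                   # frames per second

preload_bytes = 128 * 1024     # common per-sample preload buffer (128 kB)
preload_frames = preload_bytes // BYTES_PER_FRAME   # 32768 frames
preload_ms = 1000 * preload_frames / RATE           # ~743 ms of audio to bridge disk latency

card_bytes = 256               # common soundcard buffer (256 bytes)
card_frames = card_bytes // BYTES_PER_FRAME         # 64 frames
card_ms = 1000 * card_frames / RATE                 # ~1.45 ms of output latency
```

    Multiplied across thousands of loaded samples, that 128 kB preload per sample is where the RAM goes - which is why the discussion keeps circling back to shrinking it.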
  • (wrote this before the post above but will leave it here anyway)

    Well, I think the point is that we like the direction of VI and the interface, but unfortunately the hardware is not there yet to load and stream the entire cube on a single box with the full sample quality. Nor probably will it be for a few years yet.
    We also like the fact that VI sounds near-real when exported.
    So we are happy with the interface and happy with the end result.
    What we are not yet happy with is the efficiency of the workflow that sits between the two. Granted, VSL are working hard on this, as you can see from other posts, and that is great! But there is more we can do to make the VI engine even smarter.
    We don't need a different product like OPUS. The product and direction are correct.
    What we do want is the ability to hear our pieces live as we compose them, with no pops or clicks, and a workflow that can run on a single box.
    What I have suggested above would achieve this. When you export/mix-down, of course, you export at full quality... you don't care that it takes longer at that point... it is off-line... you can go and have a coffee...
    The bottom-line is that I believe an awful lot of the composers here would jump at the chance of trading real-time quality (ONLY DURING COMPOSING remember) for the ability to load the entire orchestral template in one DAW project.
    I can understand that, with the care and attention VSL take to ensure we have a high-quality, well-recorded product, the notion of somehow making that product sound 'worse' than when it was recorded is sacrilege.
    Of course, I would prefer everything now... but we are not going to get that.
    In terms of how easy this would be to implement, I imagine pretty easy. The only issues I can think of that would get in the way are:
    a) whether the encryption of the files means that you need to have access to all the data;
    b) how to make the DAW know you are exporting, and therefore turn off draft mode for that.
    We should try an experiment: export a track at the full bandwidth and try downsampling it to different sample rates. I think we would find that we could quite happily live with much lower sample rates during composition.

  • last edited

    @cm said:

    ok, i was understanding the term *sample* in the sense it is used in the VSL product description (violin staccato has xy samples, etc.).

    if we look at it in a digital manner we have to clarify definitions first:
    - the headers of a PCM/RIFF file use a 32 bit data format (this is the reason why a RIFF file is limited to 4 GB maximum, usually even 2 GB because of the difference between signed and unsigned integers)
    - the sampling bit-depth can be 8/16/24/32 bit (VSL is 16 bit currently; of course it can theoretically be lower than 8 bit, but i'm sure we wouldn't like that. additionally, 8 bit is stored as unsigned integer, whereas 16 bit is signed integer, and 32 bit would be floating point)
    - the sampling rate can be anything (VSL uses 44100 Hz currently - /2 = 22050, /4 = 11025, /8 = 5512.5 ... oops, here we leave the integers and would have to start rounding, because i can't play half a sample)
    - PCM data is stored interleaved in frames, and each frame consists of sequential data for the channels used (2 in the case of VSL, because of stereo)
    i'm basically referring to this description
    - the Nyquist-Shannon theorem says that unambiguous reconstruction is possible if the signal is band-limited and the sampling frequency is greater than twice the signal bandwidth. that's why telephone lines don't sound too good, and i think a sampling frequency of 11025 would leave us with an unacceptable *quality*.
    one could possibly now *invent* (= interpolate) the missing samples to stay at 44100, or repeat the *reference sample*, but i've never tried that and probably wouldn't like to hear the result. do you have any examples to compare the quality by ear?

    based on this - where would you cut the amount of information in half (and further /4, /8, etc.) without losing an acceptable sound?

    another issue is the needed amount of preloaded samples. this is basically only necessary to fill the various buffers (soundcard, audio application, operating system, harddisk I/O) and *hide* the various latencies.
    128 kB (= 32,768 samples at 16 bit stereo) is a common value for most sampling engines - ViennaInstruments needs far less (and is 24 bit!).
    256 bytes (= 64 samples at 16 bit stereo) is a common value for soundcards - many users need to use a higher value because their system cannot deliver such a data stream continuously.

    any efforts in this direction (reducing amount of preloaded data) are useless IMO and stay theoretical unless we get storage media with an almost-zero seektime like compact-flash or solid state disks with enough capacity for a reasonable price.
    christian


    Thanks for the detailed reply Christian. I think we're getting to an interesting place now...
    This is how I envisaged this working (and I'm fully prepared to be told that it doesn't work - it is just an idea):
    1) You have a 'quality' value that can be controlled on the VI, defaulted to 1 but changeable to 2 or 4. As you say, values higher than 4 would not only probably sound bad, but would also start to cause rounding issues, as 44100/8 is not a whole number. That isn't to say it isn't possible, of course.
    2) When pre-loading or streaming samples from disk, you use this value to load only every nth sample value in the file, skipping the samples in between. In other words, with the value set to 4, you take samples 0, 4, 8, 12, 16, etc.
    This in turn reduces the pre-load buffer by a factor of n. In other words, with a value of 4, we could theoretically load 4x as many instruments into memory. What you end up loading is a less accurate sample waveform, but one that is much smaller.
    3) When playing back at 44100, you repeat each sample value n times so that it appears to be a recording made at 44100. At no point does the VI change its sample rate. It always remains at 44100.

    There are improvements on the above. As you say, you could interpolate rather than playing 4 identical sample values: you still save the memory, but synthetically create slightly different values for each played sample so that it sounds closer to the original.
    After all, this is how computer animation works: quite often frames are interpolated from the frames around them to speed up the workflow.

    I would like to experiment with this... although I'll have to read up on how to read a wav file programmatically... what would be an interesting test would be to read a 44100 wav file into memory and write out different versions of the same file using different settings as above. Also try out the interpolation. I may look into this and get back to the forum and post the results for people's assessment of the quality.
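    A small numerical experiment (illustrative only - the 1 kHz tone and the factor of 4 are arbitrary test choices, not anything VI does) suggests how much better interpolation tracks the original than simply repeating each kept value:

```python
import math

# Compare two ways of reconstructing a decimated signal:
# zero-order hold (repeat each kept value N times) versus
# linear interpolation between neighbouring kept values.

RATE = 44100
FREQ = 1000          # 1 kHz test tone
N = 4                # decimation factor (44100 -> 11025 kept values/s)

original = [math.sin(2 * math.pi * FREQ * t / RATE) for t in range(RATE // 100)]
kept = original[::N]

# zero-order hold: each kept value stands in for the next N samples
held = [v for v in kept for _ in range(N)]

# linear interpolation between neighbouring kept values
interp = []
for i in range(len(kept) - 1):
    for k in range(N):
        interp.append(kept[i] + (kept[i + 1] - kept[i]) * k / N)

err_hold = max(abs(a - b) for a, b in zip(original, held))
err_lerp = max(abs(a - b) for a, b in zip(original, interp))
assert err_lerp < err_hold   # interpolation tracks the tone far more closely
```

    For this tone the hold error is roughly ten times the interpolation error, which is why interpolated drafts should sound noticeably less harsh than simple repeats.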

  • I don't think working with an 11 kHz sample rate would be fun, but I think most composers would be happy composing with a 22 kHz sample rate as a "draft" mode if they'd get more bang out of the hardware. 11 kHz of audio bandwidth is not too bad, but 5.5 kHz is really bad.

  • last edited

    @cm said:


    any efforts in this direction (reducing amount of preloaded data) are useless IMO and stay theoretical unless we get storage media with an almost-zero seektime like compact-flash or solid state disks with enough capacity for a reasonable price.
    christian


    Nobody responded to my question of whether USB flash drives would be suitable for sample preload buffering. Are they too slow?

  • last edited
    for another reason i've done some research on such storage types and found the best available values to be 8 GB at 40 MB/s.
    whereas 40 MB/s is not so much more than what modern harddisks can deliver, 8 GB will definitely be too little - rough math suggests a need of about 24 GB for the current symphonic cube.
    i haven't spent a thought so far on how such an additional buffer would fit into any sampling system ....

    there is of course a new type of solid state disk (also using NAND technology) with more storage space for datacenter and military use, but they are not faster, and the *old type* using battery-powered RAM is really overkill.
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
  • last edited

    @cm said:

    another issue is the needed amount of preloaded samples. this is basically only necessary to fill the various buffers (soundcard, audio application, operating system, harddisk I/O) and *hide* the various latencies.
    128 kB (= 32,768 samples at 16 bit stereo) is a common value for most sampling engines - ViennaInstruments needs far less (and is 24 bit!).
    256 bytes (= 64 samples at 16 bit stereo) is a common value for soundcards - many users need to use a higher value because their system cannot deliver such a data stream continuously.

    any efforts in this direction (reducing amount of preloaded data) are useless IMO and stay theoretical unless we get storage media with an almost-zero seektime like compact-flash or solid state disks with enough capacity for a reasonable price.
    christian


    Just reading this again... not sure if I mis-read your post, but you seem to be indicating that you would still need a pre-load of the original size?

    That isn't so.

    The approach I am suggesting would reduce the preload buffer size by a factor of n (which is where the memory savings occur). The buffer would be 'virtual', in that it would hold only some of the data and synthesise the missing data that lies between the data points - synthesise meaning it would either repeat the same sample point n times or interpolate from the points on either side.

    I am presuming that the pre-load buffer is entirely controlled inside the C++ VI application to make this possible?

    Just in case this wasn't clear...

    I'm still looking into trying this out... I want to write a Java app that reads a 44100 Hz .wav file and creates a new .wav file using this approach. I want to produce .wav files at different frequencies and with different 'draft' settings and see what they sound like. Also try out the interpolation.

    The proof of the pudding - in this case - is in the hearing...

    Stay tuned... (if you've not already tuned out)
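    For what it's worth, the experiment described above can be run without waiting for a Java app - here is a stand-alone sketch using Python's stdlib `wave` module (the file names and the 1 kHz test tone are made up for the example):

```python
import math
import struct
import wave

# Generate a 1-second 44100 Hz test tone, then write 'draft' versions
# that keep every n-th value and hold it n times (the file stays at
# 44100 Hz, mono 16-bit, mirroring the scheme discussed in the thread).

def write_wav(path, frames, rate):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                    # 16-bit mono for simplicity
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(frames), *frames))

def read_wav(path):
    with wave.open(path, "rb") as w:
        data = w.readframes(w.getnframes())
        return list(struct.unpack("<%dh" % (len(data) // 2), data)), w.getframerate()

def draft(frames, n):
    """Keep every n-th value, then hold it n times (length and rate unchanged)."""
    return [v for v in frames[::n] for _ in range(n)]

tone = [int(20000 * math.sin(2 * math.pi * 1000 * t / 44100)) for t in range(44100)]
write_wav("full.wav", tone, 44100)
for n in (2, 4, 8):
    write_wav("draft_%d.wav" % n, draft(tone, n), 44100)
```

    Listening to `draft_2.wav` against `draft_8.wav` would give a rough feel for how much quality a given slider value sacrifices.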

  • With a PCMCIA card plugged into an external box housing memory and/or hard drives, and with future laptops hopefully housing 16 GB plus of RAM, the technology is almost there to easily run the full cube on a laptop - it must be less than 2 years away.

  • I just had this idea to circumvent the 2 GB RAM software limitation when more RAM is installed: why can't the sampler developers use a virtual disk, a RAM disk? Or several of them?

  • VSL forum is where the engineering lives ... [:P]
    not a bad idea, although this is something the user would have to configure in his operating system.
    AFAIK a RAM-disk is also limited to 2 GB, and using several would require having the physical memory available in your computer.
    so 1 RAM-disk at 2 GB, kernel memory 1 GB (for windows, drivers, etc.), user memory 2 GB (for samples) makes 5 GB RAM in total on your computer, and you'd need an operating system which can use more than 4 GB (e.g. windows 2003 server enterprise edition - happy downsizing, btw.)

    somehow it looks as if we'd need a vista 64 bit version to get somewhere, and then a 64 bit VI could run on it and - we would not need any RAM-disk .... seems the cat bytes its own tail
    christian

    and remember: only a CRAY can run an endless loop in just three seconds.
  • Engineering is fun! A lot of engineers are musicians funnily enough...

    The other thing that struck me with reference to the 'draft' mode idea is that its primary aim is to reduce the memory used by pre-loading the samples.
    Perhaps, if that is the case, the draft quality need only apply to the pre-loaded samples. All subsequently streamed samples would be played as they currently are, at full quality?
    The pre-load is so small, and most notes have a slow(ish) attack, that you may not notice the difference...
    The utopia here is to not have any pre-load. It is merely there, as Christian has pointed out, to circumvent the latency in your system.
    In my humble opinion, the only real barrier to having the entire cube loaded is the issue of the RAM used by the pre-load. If we can intelligently solve that problem...

  • I agree. With what little I know, it seems to me that the best solution here is to wait a short while for technology to catch up. Spending money on software development to compensate for the lack of capable hardware will only result in that software being redundant, because by the time it's ready and usable, the hardware availability will negate the need for the compensatory development in the first place. At best there would be a 6-month to 1-year period when the software would have been worth developing and usable (and then you have to ask how many would use it?). Of course I'm guessing at technology changes over that time, but generally speaking, the investment of time and money probably wouldn't be worth it, so it's probably better to wait for the hardware to catch up. I think Mac laptops even now *could* hold 16 GB(???), if the chips were available in that size, i.e. 2x 8 GB SIMMs. PCs I don't know about, but as cm says, a 64-bit OS will yield results in the hardware department - better to let MS do the work there, and again the hardware will catch up quickly thereafter (straight away for towers, a bit later perhaps for laptops).

    Miklos.