Vienna Symphonic Library Forum

  • @mdovey said:

    If VSL do produce a midi keyboard, I hope it is a little more like a conventional midi keyboard in size and shape, and less like a 7 foot grand piano in size and shape, since I don't have room for the latter!

    Hopefully you're right, Matthew. A 7-foot grand piano sounds a bit over the top.

    My Steinway's only got 3.

    Colin


  • this is what the boesendorfer "original" prototype looks like:

    http://oesterreich.orf.at/wien/stories/178007/


  • @julian said:

    However I'm still interested in the 10:1 VSL compression claim - no one else, to my knowledge, has got beyond 2:1 without affecting data integrity, i.e. without the compression becoming lossy.

    I suspect that the high compression ratio is due to the (highly repetitive) nature of the data being compressed.

    Consider how (very) low-quality sample libraries work (or how the early ones did): a library might have just one sample for a pitch, recorded at velocity 50. To play the same note at velocity 100 it would simply double the amplitude of that sample, which, although far from perfect, is a reasonable approximation. So for compression purposes, if you do have a sample for velocity 100, what you could store, rather than the sample itself, is the difference between the velocity-100 sample and the velocity-50 sample with its amplitude doubled.

    Now whilst that may not achieve 10:1 compression, consider the difference between a velocity-51 sample and the velocity-50 sample (with an appropriate increase in amplitude) - the differences here would be pretty small, so the stored residual should compress very well (possibly better than 10:1).

    Similar compression can be achieved for pitch - instead of storing the entire C2 sample, store the difference between the C2 sample and the C1 sample played back at double speed.

    Now, I don't know whether the VSL compression techniques are based on the above, but the reasoning is enough to persuade me that it is plausible that the sort of data needed in a sample library has characteristics which can be exploited to achieve higher compression ratios than are normally possible in more generic data sets.

    Matthew

    With many video compression techniques, for each consecutive frame only the differences are sent as new data until the next keyframe (a full frame) is sent. This allows significant data reduction, though when the picture changes rapidly, as in a fast-moving sequence, there is less redundancy from frame to frame and artefacts soon become apparent.

    However, when VSL sample a piano, for example, every sample is unique. It may have the same pitch and be slightly louder than the sample before, but there is no redundancy between samples of the same pitch. Each and every sample is unique and cannot share data (however compressed) with another sample. So VSL make the original recordings and then edit and produce the final uncompressed samples at 24-bit/44.1 kHz for current delivery.

    The quoted data size for the complete set of samples is advertised at 500 GB - a rough calculation using the data rate of a 24-bit stereo 44.1 kHz recording (about 2,116.8 kbps, or 264.6 KB per second) gives us a total recorded duration of, amazingly, about 525 hours if all the samples were played back to back in their entirety.

    So by reducing the data from 500 GB to 50 GB, VSL are squeezing 525 hours of recordings into the space normally occupied by 52.5 hours. Now, we all know MP3 encoding achieves this all the time, but I do not understand how the piano samples, when played from the Vienna Imperial software engine, can be the same quality as the original when they are re-created from only 10% of the original data.

    I do hope someone from VSL will expand on how the seemingly impossible is being achieved!

    Thanks

    Julian
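
    (A quick check of the arithmetic above - a minimal sketch in Python; the 500 GB figure is the advertised size quoted in the post:)

    ```python
    # Data rate of 24-bit stereo 44.1 kHz PCM, and how long 500 GB of it
    # would last if played back to back.
    SAMPLE_RATE = 44_100               # Hz
    CHANNELS = 2                       # stereo
    BIT_DEPTH = 24                     # bits per sample

    bytes_per_sec = SAMPLE_RATE * CHANNELS * BIT_DEPTH // 8
    print(bytes_per_sec)               # 264600 -> ~264.6 KB/s (~2116.8 kbps)

    library_bytes = 500 * 10**9        # advertised 500 GB (decimal GB assumed)
    hours = library_bytes / bytes_per_sec / 3600
    print(round(hours))                # ~525 hours
    ```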


  • @julian said:

    no one else, to my knowledge, has got beyond 2:1 without affecting data integrity

    allow me to step in ... whereas we achieved interesting compression ratios with a few products (especially harps and acoustic guitar, roughly 3:1), i have to concede i couldn't believe it when i first saw the result. but be aware: this is a very special situation, related not only to the wide spread of velocity layers, but more to the particular character of the raw wave data and the compression algorithm used - i doubt this would be reproducible with any other instrument than a piano.

     

    while i'm here: the very special thing about the Vienna Imperial is that - to my knowledge *) - for the first time it has been possible to keep such compressed samples in memory and uncompress them on the fly in the new engine while streaming. congrats to george for this masterpiece in software.

    christian

     

    *) on-the-fly uncompressing at the legacy compression rate of 3:2 had been achieved earlier with GigaStudio 3 and the regular VI/VE engine


    and remember: only a CRAY can run an endless loop in just three seconds.
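
    (A toy illustration of what christian describes - compressed samples staying resident in RAM and being uncompressed block by block while streaming. zlib stands in for whatever codec VSL actually uses, and all names here are hypothetical:)

    ```python
    import zlib

    BLOCK = 64 * 1024  # independently compressed block size (arbitrary here)

    def compress_blocks(pcm: bytes) -> list[bytes]:
        # Store a sample as independently compressed blocks so the player
        # can decode exactly the block it needs next, just in time.
        return [zlib.compress(pcm[i:i + BLOCK])
                for i in range(0, len(pcm), BLOCK)]

    def stream(blocks: list[bytes]):
        # Decode one block per audio callback instead of holding raw PCM;
        # only the compressed data ever sits in memory.
        for block in blocks:
            yield zlib.decompress(block)
    ```
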
  • I would like to say that this is a wonderful performance by Jay on the new piano.

    I really enjoyed listening to this, Jay.

    many thanks,

    Steve 😀


  • Hello people,

    I admire VSL for the products and the great service, but at the moment I am wondering if the "Vienna Imperial" is suitable for pop arrangements. My problem is that all sample-based pianos seem to be too weak for that purpose and too far away, wet and smooth in their recording. Will that be better than in the case of the "Boesendorfer Imperial", with the close microphone positioning? To be honest, I thought about getting the Eastwest piano with its Yamaha piano - but 270 GB seems to be a kind of show-off and no guarantee of real quality - and VSL is much more sympathetic to me. And a last question: will my 2.6 GHz Quad Mac from 2006 be strong enough for both libraries?


  • @Sakamoto said:

    will my 2.6 GHz Quad Mac from 2006 be strong enough

    this is a G5 - right? sorry, intel only. christian


    and remember: only a CRAY can run an endless loop in just three seconds.
  • Hello, no, it's an Intel 2.6 with 4 processors and 6 GB of RAM. Thanks for the fast answer - I was already worried that I might have upset you by mentioning Eastwest. But I didn't want to be impolite - it's just an honest question. Do you think the Mac is fast enough, and the Imperial tougher than the brilliant but somehow smooth Boesendorfer Imperial?


  • Any more demos coming soon? I'm curious about one thing: the demo Christian played sounds less mid-rangy than the Etude. I prefer the more open sound of the Christian demo. Can you tell me what the reason for the different sound would be? Was it rendered from a different perspective than the "close-mike" perspective? Or did he EQ it a bit? I would like to hear more demos that sound like that demo from Christian. This is the sound I'm wanting to hear... but the problem with the demo from Christian is that it's either VERY loud or VERY soft. Not much in between.


  • no problem - we ran it, even the pre-release version, on a 2.5 GHz core2duo with 4 GB RAM (single 2.5" disk, 160 GB) ...

    the largest preset will take ~1.5 GB RAM; just place the content on a less stressed disk to allow good streaming.

    CPU is not a problem at all (with or without sympathetic resonance)

    christian

     

    ps: someone asked if the Imperial can be run on a 2 GB machine ... not easily ... a very downsized XP32 with 2 GB RAM did load the default preset immediately after boot; otherwise not enough free memory is available


    and remember: only a CRAY can run an endless loop in just three seconds.
  • @cm said:

    ps: someone asked if the Imperial can be run on a 2 GB machine ... not easily ... a very downsized XP32 with 2 GB RAM did load the default preset immediately after boot; otherwise not enough free memory is available

    This sounds like you are locking down the pages in memory, which is a good thing. It means that once it is loaded, it will never get paged out. So I can assume that if it loads, it will run.

    May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?


  • @arne said:

    May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?
     

    On modern computers you could set the latency to 1.2 ms (32 samples).

    Of course you have to turn off the convolution reverb at such low latency settings.

    best

    Herb 


  • this is a principle of sample-player engines - since only the sample headers are loaded into memory, the allocated space is locked - at least that's what the engine tells the os 😉 this memory space is then used for buffering data (as soon as you start to play a certain sample, this buffer gets filled up from the hard disk with consecutive data)

    to which recommended machine do you refer now? here are the system requirements

    IIRC on the 2.5 GHz core2duo the Imperial has been played by default at 128 samples latency ... depending on system, audio device driver quality and settings you could go down to 64 (on rare occasions even 32) or need to increase

    hth, christian


    and remember: only a CRAY can run an endless loop in just three seconds.
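
    (A minimal sketch of the preload-and-stream principle described above; the class and constant names are hypothetical, not VSL's engine API, and a real engine would also lock these buffers and refill them on a dedicated disk thread:)

    ```python
    HEADER_BYTES = 64 * 1024  # per-sample resident buffer (cf. VI/VE's 64 KB)

    class StreamedSample:
        def __init__(self, path: str):
            self.path = path
            with open(path, "rb") as f:
                self.header = f.read(HEADER_BYTES)  # preloaded, stays in RAM

        def play(self):
            # Playback starts instantly from the resident header while the
            # rest of the sample is read from disk as consecutive data.
            yield self.header
            with open(self.path, "rb") as f:
                f.seek(HEADER_BYTES)
                while chunk := f.read(HEADER_BYTES):
                    yield chunk
    ```
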
  • 15k for a digital piano? If it's anything near this price, the feel must surely be perfect... and even then...

  • @Another User said:

    to which recommended machine do you refer now? here are the system requirements

    IIRC on the 2.5 GHz core2duo the Imperial has been played by default at 128 samples latency ... depending on system, audio device driver quality and settings you could go down to 64 (on rare occasions even 32) or need to increase

    These requirements are quite modest, so I might have referred to them 😉

    64 samples is acceptable; 128 is a bit too much for my taste - it starts to feel sluggish. It is good to know that you have seen setups running at 32 samples, even if that is not the common case.


  • AFAIK ivory works with a 384 KB buffer per sample, whereas VI/VE works with 64 KB ... doing the quick math, it appears the Imperial actually works with only 32 KB (i'll have to ask for the details) ...

    i have been told the difference between 64 and 32 samples can more or less not be perceived (honestly: who is able to hear a 0.7 ms difference ...) whereas CPU load increases dramatically and the hard disk must be really good.

    christian


    and remember: only a CRAY can run an endless loop in just three seconds.
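
    (The samples-to-milliseconds arithmetic behind the "0.7 ms" remark, as a minimal sketch:)

    ```python
    # Audio buffer latency at a 44.1 kHz sample rate.
    def latency_ms(buffer_samples: int, sample_rate: int = 44_100) -> float:
        return 1000 * buffer_samples / sample_rate

    for n in (32, 64, 128):
        print(f"{n:>3} samples -> {latency_ms(n):.2f} ms")
    # 32 -> 0.73 ms, 64 -> 1.45 ms, 128 -> 2.90 ms; dropping from 64 to 32
    # samples saves ~0.7 ms of buffer latency.
    ```
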
  • @julian said:

    However, when VSL sample a piano, for example, every sample is unique. It may have the same pitch and be slightly louder than the sample before, but there is no redundancy between samples of the same pitch


    Here, I think we may need to agree to differ. Given the mechanical nature of a piano, and all other things (environment, mic positions, touch etc.) being constant, I would suspect that the sample for middle C at velocity 50 is pretty similar to that at velocity 51. Similar enough, I suspect, that you could overlay both waves such that for each time sample the difference in amplitude can be expressed in 4-5 bits. I suspect that the accuracy of the CEUS engine in carefully controlling the playback helps greatly here.

    Matthew
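
    (A minimal sketch of the residual idea Matthew describes, on synthetic data built under his assumption that adjacent velocity layers are near-identical up to gain - not VSL's actual algorithm:)

    ```python
    import numpy as np

    # Two synthetic "velocity layers": v51 is v50 with slightly more gain
    # plus a tiny residual - the assumption Matthew makes above.
    rng = np.random.default_rng(0)
    v50 = rng.standard_normal(44_100).astype(np.float32)
    v51 = 1.02 * v50 + 0.001 * rng.standard_normal(44_100).astype(np.float32)

    # Least-squares gain match, then store only the residual.
    gain = float(np.dot(v51, v50) / np.dot(v50, v50))
    residual = v51 - gain * v50

    # The residual's amplitude range is roughly 1000x smaller here,
    # so it can be stored in far fewer bits than the full sample.
    print(np.abs(v51).max(), np.abs(residual).max())
    ```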

  • @mdovey said:

    Here, I think we may need to agree to differ. Given the mechanical nature of a piano, and all other things (environment, mic positions, touch etc.) being constant, I would suspect that the sample for middle C at velocity 50 is pretty similar to that at velocity 51. Similar enough, I suspect, that you could overlay both waves such that for each time sample the difference in amplitude can be expressed in 4-5 bits. I suspect that the accuracy of the CEUS engine in carefully controlling the playback helps greatly here.

    If you interpolate data between samples, it is no longer a true representation of the original sample. Also, if you were to make two consecutive recordings of a piano string being hit by a hammer at the same MIDI velocity and then tried to phase-cancel them 100%, it just would not happen, as a piano is an analogue instrument, not a digital waveform.

    It would be disappointing to learn that, after all the effort of high-level sampling, what is provided as an end result is a digitally compressed "modelling" of a piano note rather than the actual recording reproduced faithfully using all the original data (i.e. lossless).

    Of course, only VSL are in a position to compare the original with the compressed version to understand the compromises involved. But I'm sure most composers or music mixers would question having their final mixes subjected to 10:1 compression outside of the squashing that occurs for iTunes/MP3s.

    Julian


  • you really need to distinguish between lossy (AAC, mp3, etc.) and lossless compression (rar, zip, etc.) ...

    usually rar is much better than zip ... though i can show you examples (of files) which compress to 1% of the original size and others which you can't get below 99% ... it always depends on what is in the file and which algorithm is used for compression.

     

    actually PCM is already some sort of (lossless) compression ... christian


    and remember: only a CRAY can run an endless loop in just three seconds.
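
    (christian's point - that the ratio depends entirely on what is in the file - is easy to demonstrate with any lossless codec, e.g. zlib:)

    ```python
    import os
    import zlib

    repetitive = b"abcd" * 250_000        # highly redundant data
    random_like = os.urandom(1_000_000)   # no redundancy at all

    for name, data in (("repetitive", repetitive), ("random", random_like)):
        ratio = len(data) / len(zlib.compress(data, 9))
        print(f"{name}: {ratio:.2f}:1")
    # the repetitive file compresses by orders of magnitude, while the
    # random one barely reaches 1:1 - same algorithm, same file size.
    ```
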
  • Hi Christian,

    I've never known an audio file zip to save more than about 10% (i.e. 90% of the original size remains) except when there has been silence in the file, whereas lossy compression can be extremely effective, but with the downside of compromising the original quality.

    It is because of exactly this that I am most interested in the rationale behind VSL (purveyors of quality!) choosing a 10:1 lossy algorithm ... or has someone discovered the holy grail!

    Julian