Vienna Symphonic Library Forum
Forum Statistics

185,546 users have contributed to 42,395 threads and 255,527 posts.

In the past 24 hours, we have 0 new thread(s), 12 new post(s) and 50 new user(s).

  • Thanks for the reply Clemens. I look forward to something new coming our way.

    I haven't seen anything for a long time that has appealed. Joysticks, modwheels on the side. Dodgy software/firmware. How difficult is it to produce a decent MK at a reasonable price?

    Colin


  • it depends on what price seems reasonable to you. a good grand piano action is not cheap. i would expect the price for something like a vienna grand to be around 15k. 


  • it may be useful to have a look at the yamaha range of products. 

    vienna grand will be much cooler, but i´m quite satisfied with my yus-5. good masterkeyboard, and a really really good piano as well.


  • @clemenshaas said:

    it depends on what price seems reasonable to you. a good grand piano action is not cheap. i would expect the price for something like a vienna grand to be around 15k. 

    Sounds reasonable to me.  [:O]

    I mean, we are talking 15k if the Italians opt out of the Euro and reintroduce the lira. Yes?

    Colin


  • @julian said:

    However I'm still interested in the 10-1 VSL compression claim - no one else, to my knowledge, has got beyond 2-1 without affecting data integrity i.e. not lossless.

    I suspect that the high compression ratio is due to the (highly repetitive) nature of the data being compressed.

    Consider how (very) low quality sample libraries work (or the early ones) - a library might just have one sample for a pitch sampled at velocity 50. To play the same note at velocity 100 it would just double the amplitude of that sample, which although far from perfect is a reasonable approximation. So for compression purposes, if you do have a sample for velocity 100 what you could store rather than that sample itself is the difference between the velocity 100 sample and the velocity 50 sample with the amplitude doubled.

    Now whilst that may not achieve 10-1 compression, consider the difference between a velocity 51 sample and the velocity 50 sample (with an appropriate increase in amplitude) - the differences here would be pretty small, so the achievable ratio could well exceed 10-1.

    Similar compression can be achieved for pitch - instead of storing the entire c2 sample, store the difference between the c2 sample and the c1 sample with the speed doubled.

    Now, I don't know if the VSL compression techniques are based on the above - but the above reasoning is enough to persuade me that it is plausible that the sort of data needed in a sample library has characteristics which can be taken advantage of to achieve higher compression ratios than are normally possible in more generic data sets.
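
    Matthew's delta-coding idea can be sketched in a few lines of Python. This is a toy illustration under invented assumptions (sine-wave "samples" and a small made-up deviation between layers), not VSL's actual algorithm:

```python
import math

SAMPLE_RATE = 44100


def make_sample(amplitude, freq=440.0, n=1024):
    """Toy stand-in for a recorded note: a sine wave at the given amplitude."""
    return [amplitude * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]


# Two "velocity layers" of the same note; in real recordings they are
# similar but not identical, so give the louder one a small deviation.
soft = make_sample(0.4)
loud = [2.0 * s + 0.01 * math.sin(7 * i)
        for i, s in enumerate(make_sample(0.4))]

# Instead of storing `loud` outright, store its difference from the
# amplitude-doubled soft layer. The residual is tiny, so a generic
# lossless entropy coder would need far fewer bits for it.
residual = [l - 2.0 * s for l, s in zip(loud, soft)]

peak_loud = max(abs(x) for x in loud)
peak_residual = max(abs(x) for x in residual)
print(f"peak of loud layer: {peak_loud:.3f}")
print(f"peak of residual:   {peak_residual:.3f}")

# Decoding just adds the prediction back, so no information is lost.
decoded = [r + 2.0 * s for r, s in zip(residual, soft)]
```

    On real recordings the residual would not be this clean, but as long as neighbouring velocity layers are strongly correlated, an entropy coder spends far fewer bits on the residual than on the raw sample - losslessly.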

    Matthew


  • @clemenshaas said:

    it may be useful to have a look at the yamaha range of products. 

    vienna grand will be much cooler, but i´m quite satisfied with my yus-5. good masterkeyboard, and a really really good piano as well.

    Cheers. Will check it out tomorrow and hopefully won't require any further medical assistance as a result of your last post.

    Colin


  • @ct1961 said:

    Is this just wishful thinking or have I missed an announcement somewhere of VSL's intent to produce a master keyboard?

    I know they had something at NAMM, but I was under the impression that was a one-off proof of concept intended more as a publicity aid? The various reports from NAMM describe it as "Vienna Orchestral Piano prototype, Vienna's first hardware concept study"

    If VSL do produce a midi keyboard, I hope it is a little more like a conventional midi keyboard in size and shape, and less like a 7 foot grand piano in size and shape, since I don't have room for the latter!

    Matthew


  • @mdovey said:

    If VSL do produce a midi keyboard, I hope it is a little more like a conventional midi keyboard in size and shape, and less like a 7 foot grand piano in size and shape, since I don't have room for the latter!

    Hopefully you're right, Matthew. A 7 foot grand piano sounds a bit over the top.

    My Steinway's only got 3.

    Colin


  • @mdovey said:

    If VSL do produce a midi keyboard, I hope it is a little more like a conventional midi keyboard in size and shape, and less like a 7 foot grand piano in size and shape, since I don't have room for the latter!

    Hopefully you're right, Matthew. A 7 foot grand piano sounds a bit over the top.

    My Steinway's only got 3.

    Colin

    this is what the boesendorfer "original" prototype looks like.

    http://oesterreich.orf.at/wien/stories/178007/


  • @julian said:

    However I'm still interested in the 10-1 VSL compression claim - no one else, to my knowledge, has got beyond 2-1 without affecting data integrity i.e. not lossless.

    I suspect that the high compression ratio is due to the (highly repetitive) nature of the data being compressed.

    Consider how (very) low quality sample libraries work (or the early ones) - a library might just have one sample for a pitch sampled at velocity 50. To play the same note at velocity 100 it would just double the amplitude of that sample, which although far from perfect is a reasonable approximation. So for compression purposes, if you do have a sample for velocity 100 what you could store rather than that sample itself is the difference between the velocity 100 sample and the velocity 50 sample with the amplitude doubled.

    Now whilst that may not achieve 10-1 compression, consider the difference between a velocity 51 sample and the velocity 50 sample (with an appropriate increase in amplitude) - the differences here would be pretty small, so the achievable ratio could well exceed 10-1.

    Similar compression can be achieved for pitch - instead of storing the entire c2 sample, store the difference between the c2 sample and the c1 sample with the speed doubled.

    Now, I don't know if the VSL compression techniques are based on the above - but the above reasoning is enough to persuade me that it is plausible that the sort of data needed in a sample library has characteristics which can be taken advantage of to achieve higher compression ratios than are normally possible in more generic data sets.

    Matthew

    With many video compression techniques, for each consecutive frame only the differences from the previous frame are sent as new data until the next keyframe (carrying the full frame data) is sent. This allows significant data reduction, though when the picture changes rapidly, as in a fast-moving sequence, there is less redundancy from frame to frame and artefacts soon become apparent.

    However when VSL sample a piano, for example, every sample is unique. It may have the same pitch and be slightly louder than the sample before, but there is no redundancy between samples of the same pitch: each and every sample is unique and cannot share data (however compressed) with another sample. So VSL makes the original recording and then edits and produces the final uncompressed samples in 24bit/44.1k for current delivery.

    The quoted data size for the complete set of samples is advertised at 500GB - a rough calculation using the data rate of a 24 bit stereo 44.1k sample, 2,116.8 kbit/s or 264.6 kB/s, gives us a total recorded duration of, amazingly, 525 hours if all the samples were played back to back in their entirety.

    So by reducing the data from 500GB to 50 GB VSL are squeezing 525 hours of recordings into the space normally occupied by 52.5 hours. Now we all know mp3 encoding achieves this all the time but I do not understand how the piano samples when played from the Vienna Imperial software engine can be the same quality as the original when it is re-created from only 10% of the original data.
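
    For reference, Julian's arithmetic can be verified quickly - 24 bit stereo at 44.1 kHz works out to 264,600 bytes per second, and 500 GB at that rate is indeed roughly 525 hours:

```python
# Sanity-check the figures in the post: 24-bit stereo at 44.1 kHz.
sample_rate = 44100        # frames per second
bytes_per_frame = 3 * 2    # 24 bits = 3 bytes, 2 channels
bytes_per_second = sample_rate * bytes_per_frame  # 264,600 B/s = 264.6 kB/s

total_bytes = 500e9        # advertised 500 GB of samples
hours = total_bytes / bytes_per_second / 3600
print(f"{bytes_per_second} B/s -> {hours:.0f} hours of audio")  # ~525 hours
```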

    I do hope someone from VSL will expand on how the seemingly impossible is being achieved!

    Thanks

    Julian


  • @julian said:

    no one else, to my knowledge, has got beyond 2-1 without affecting data integrity

    allow me to step in ... whereas we received interesting compression ratios with a few products (especially harps and acoustic guitar ~1:3), i have to concede i couldn't believe it when i first saw the result. but be aware: this is a very special situation, related not only to the wide spread of velocity layers, but more to the particular character of the raw wave data and the compression algorithm used - i doubt this would be reproducible with any other instrument than a piano.

     

    while i'm here: the very special thing about the Vienna Imperial is that - to my knowledge *) - this is the first time it has been possible to keep such compressed samples in memory and uncompress them on the fly in the new engine while streaming. congrats to george for this masterpiece in software.

    christian

     

    *) on-the-fly uncompressing for a legacy compression rate of 3:2 was achieved earlier with GigaStudio 3 and the regular VI/VE engine


    and remember: only a CRAY can run an endless loop in just three seconds.
  • I would like to say that this is a wonderful performance by Jay on the new piano.

    I really enjoyed listening to this Jay.

    many thanks,

    Steve[:D]


  • Hello people,

    I admire VSL for its products and the great service, but at the moment I am wondering whether the "Vienna Imperial" is suitable for pop arrangements. My problem is that all sample-based pianos seem too weak for that purpose, and too far away, wet and smooth in their recording. Will that be better here than in the case of the "Boesendorfer Imperial", with the close microphone positioning? To be honest, I thought about getting the Eastwest piano with its Yamaha piano - but 270 GB seems a kind of show-off and no guarantee of real quality - and VSL appeals to me much more. And a last question: will my 2.6 GHz Quad Mac from 2006 be strong enough for both libraries?


  • @Sakamoto said:

    will my 2.6 GHz Quad Mac from 2006 be strong enough

    this is a G5 - right? sorry, intel only. christian


  • Hello, no it's an Intel 2.6 with 4 processors and 6 GB of RAM. Thanks for the fast answer - I was already worried that I might have upset you by mentioning Eastwest. But I didn't want to be impolite - it's just an honest question. Do you think the Mac is fast enough, and the Imperial tougher than the brilliant but somehow smooth Boesendorfer Imperial?


  • Any more demos coming soon?    I'm curious about one thing:  the demo Christian played sounds less mid-rangy than the Etude.  I prefer the more open sound of the Christian demo.  Can you tell me what the reason for the different sound would be?  Was it rendered from a different perspective than the "close-mike" perspective?  Or did he eq it a bit?  I would like to hear more demos that sound more like that demo from Christian.  This is the sound I'm wanting to hear.... but the problem with the demo from Christian is it's either VERY loud or VERY soft.  Not much in between.  


  • no problem - we ran even the pre-release version on a 2.5 GHz core2duo, 4 GB RAM (single 2.5" 160 GB disk) ...

    the largest preset will take ~1.5 GB RAM, just place the content on a less stressed disk to allow good streaming.

    CPU is not a problem at all (with or without sympathetic resonance)

    christian

     

    ps: someone asked if the Imperial can be run on a 2GB machine ... not easily ... a very downsized XP32 2GB RAM did load the default preset immediately after boot, otherwise not enough free memory is available


  • @cm said:

    ps: someone asked if the Imperial can be run on a 2GB machine ... not easily ... a very downsized XP32 2GB RAM did load the default preset immediately after boot, otherwise not enough free memory is available

    This sounds like you are locking down the pages in memory, which is a good thing. This means once it is loaded, it will never get paged out. So I can assume that if it loads it will run.

    May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?


  • @arne said:

    May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?
     

    On modern computers you could set latency to 1.2 ms (32 samples).

    Of course you have to turn off the convolution reverb at these low latency settings.
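
    For context, the buffer portion of the latency is simply buffer size divided by sample rate; 32 samples at 44.1 kHz is about 0.73 ms per buffer, so a quoted 1.2 ms presumably also counts driver or converter buffering (an assumption on my part):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """One audio buffer's worth of latency, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

# Typical buffer sizes and the latency each adds at 44.1 kHz.
for n in (32, 64, 128):
    print(f"{n:4d} samples -> {buffer_latency_ms(n):.2f} ms")
```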

    best

    Herb 


  • this is a principle for sample player engines - since only the sample headers are loaded into memory, the allocated space is locked - at least that's what the engine tells the os 😉 - this memory space is then used for buffering data (as soon as you start to play a certain sample, this buffer gets filled up from the harddisk with consecutive data)
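
    The scheme described above can be sketched roughly as follows. This is a toy model, not VSL's engine - `StreamedSample`, `PRELOAD`, `CHUNK` and the in-memory BytesIO "disk" are all invented stand-ins:

```python
import io

PRELOAD = 4096   # bytes of each sample kept permanently in RAM ("header")
CHUNK = 1024     # bytes fetched per streaming read once a note plays


class StreamedSample:
    """Toy disk-streaming voice: the head of the sample is preloaded,
    the tail is fetched from 'disk' only while the note is sounding."""

    def __init__(self, disk_file):
        self.disk = disk_file
        self.disk.seek(0)
        self.head = self.disk.read(PRELOAD)   # resident in memory up front
        self.pos = 0                          # playback position in bytes

    def read(self, n):
        """Return the next n bytes: from the RAM head first, then disk."""
        out = b""
        if self.pos < len(self.head):
            out = self.head[self.pos:self.pos + n]
        if len(out) < n:                      # head exhausted: stream the tail
            self.disk.seek(self.pos + len(out))
            out += self.disk.read(n - len(out))
        self.pos += len(out)
        return out


# Usage: a 10 KB "sample" on disk; playback starts instantly from the
# preloaded head while the remainder streams in CHUNK-sized reads.
disk = io.BytesIO(bytes(range(256)) * 40)     # 10,240 bytes
voice = StreamedSample(disk)
played = b"".join(voice.read(CHUNK) for _ in range(10))
assert played == disk.getvalue()              # lossless playback
```

    The point of the preloaded head is that a note can begin sounding with zero disk latency, giving the streaming reads a few milliseconds of headroom to catch up.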

    to which recommended machine do you refer now? here are the system requirements

    IIRC on the 2.5 GHz core2duo the Imperial has been played by default at 128 samples latency ... depending on system, audio device driver quality and settings you could go down to 64 (on rare occasions even 32) or need to increase

    hth, christian

