Vienna Symphonic Library Forum
Forum Statistics

185,647 users have contributed to 42,400 threads and 255,553 posts.

In the past 24 hours, we have 4 new thread(s), 14 new post(s) and 48 new user(s).

  • Hello, no it´s an Intel 2.6 with 4 processors and 6 GB of RAM. Thanks for the fast answer - I was already worried that I might have upset you by mentioning EastWest. But I didn´t want to be impolite - it´s just an honest question. Do you think the Mac is fast enough, and the Imperial tougher than the brilliant but somehow smooth Boesendorfer Imperial?


  • Any more demos coming soon? I'm curious about one thing: the demo Christian played sounds less mid-rangy than the Etude. I prefer the more open sound of the Christian demo. Can you tell me what the reason for the different sound would be? Was it rendered from a different perspective than the "close-mike" perspective? Or did he EQ it a bit? I would like to hear more demos that sound like that one from Christian - this is the sound I want to hear... but the problem with Christian's demo is that it's either VERY loud or VERY soft. Not much in between.


  • no problem - we ran it, even the pre-release version, on a 2.5 GHz Core 2 Duo with 4 GB RAM (single 2.5" disk, 160 GB) ...

    the largest preset will take ~1.5 GB RAM; just place the content on a less stressed disk to allow good streaming.

    CPU is not a problem at all (with or without sympathetic resonance)

    christian

     

    ps: someone asked if the Imperial can be run on a 2 GB machine ... not easily ... a very downsized XP32 with 2 GB RAM did load the default preset immediately after boot; otherwise not enough free memory is available


    and remember: only a CRAY can run an endless loop in just three seconds.
  • @cm said:

    ps: someone asked if the Imperial can be run on a 2 GB machine ... not easily ... a very downsized XP32 with 2 GB RAM did load the default preset immediately after boot; otherwise not enough free memory is available

    This sounds like you are locking the pages down in memory, which is a good thing: once it is loaded, it will never get paged out. So I can assume that if it loads, it will run.

    May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?


  • @arne said:

    May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?
     

    On modern computers you could set the latency to 1.2 ms (32 samples).

    Of course you have to turn off the convolution reverb at such low latency settings.

    best

    Herb 


  • this is a principle of sample-player engines - since only the sample headers are loaded into memory, the allocated space is locked - at least that's what the engine tells the OS 😉 this memory space is then used for buffering data (as soon as you start to play a certain sample, this buffer gets filled up from the hard disk with consecutive data)

    to which recommended machine do you refer now? here are the system requirements

    IIRC on the 2.5 GHz Core 2 Duo the Imperial has been played by default at 128 samples latency ... depending on the system, audio device driver quality and settings, you could go down to 64 (on rare occasions even 32) or might need to increase it

    hth, christian


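As a rough illustration of the scheme christian describes - preload a fixed-size chunk of each sample, stream the rest from disk on demand - here is a minimal sketch. The class and names are invented for illustration, and the 64 KB figure (quoted elsewhere in this thread for VI/VE) is used only as an example; this is not VSL's actual engine:

```python
import io

HEADER = 64 * 1024          # bytes kept resident per sample (illustrative figure)

class StreamedSample:
    """Keep only the first chunk of a sample in RAM; fetch the rest on demand."""
    def __init__(self, data: bytes):
        self.head = data[:HEADER]           # the resident, locked 'sample header'
        self.disk = io.BytesIO(data)        # stands in for the hard disk

    def read(self, offset, n):
        if offset + n <= len(self.head):    # served instantly from the buffer
            return self.head[offset:offset + n]
        self.disk.seek(offset)              # otherwise: stream from disk
        return self.disk.read(n)

sample = StreamedSample(bytes(200 * 1024))  # a 200 KB dummy sample (all zeros)
print(len(sample.head))                     # -> 65536
```

The point of the resident header is that playback can start immediately while the disk seeks; only the first buffer's worth of every sample has to fit in RAM.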
  • 15k for a digital piano? At anything near this price the feel must surely be perfect... and even then...

  • @Another User said:

    to which recommended machine do you refer now? here are the system requirements

    IIRC on the 2.5 GHz Core 2 Duo the Imperial has been played by default at 128 samples latency ... depending on the system, audio device driver quality and settings, you could go down to 64 (on rare occasions even 32) or might need to increase it

    These requirements are quite modest, so I may well have been referring to them 😉

    64 samples is acceptable; 128 is a bit too much for my taste - it starts to feel sluggish. It is good to know that you have seen setups at 32 samples, even if that is not the common case.


  • AFAIK Ivory works with a 384 KB buffer per sample, whereas VI/VE works with 64 KB ... doing the quick math, it appears the Imperial actually works with only 32 KB (i'll have to ask for the details) ...

    i have been told the difference between 64 and 32 can hardly be recognized (honestly: who is able to hear a 0.7 ms difference ...), whereas CPU load increases dramatically and the hard disk must be really good.

    christian


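The latency figures in this exchange follow from simple arithmetic: one buffer of N samples at a 44.1 kHz sample rate takes N/44100 seconds to play out, which is where the ~0.7 ms step between 32 and 64 samples comes from:

```python
def buffer_latency_ms(samples, rate=44100):
    """Playback time of one audio buffer, in milliseconds."""
    return 1000 * samples / rate

# Common buffer sizes at 44.1 kHz:
for n in (32, 64, 128):
    print(f"{n} samples -> {buffer_latency_ms(n):.2f} ms")
# -> 32 samples -> 0.73 ms
# -> 64 samples -> 1.45 ms
# -> 128 samples -> 2.90 ms
```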
  • @julian said:

    However, when VSL sample a piano, for example, every sample is unique. It may have the same pitch and be slightly louder than the sample before, but there is no redundancy between samples of the same pitch


    Here, I think we may need to agree to differ. Given the mechanical nature of a piano, and given all other things (environment, mic positions, touch etc.) being constant, I would suspect that the sample for middle C at velocity 50 is pretty similar to that at velocity 51. Similar enough, I suspect, that you could overlay both waves such that for each time sample the difference in amplitude can be expressed in 4-5 bits. I suspect that the accuracy of the CEUS engine in carefully controlling the playback greatly helps here.

    Matthew
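Matthew's point can be illustrated with synthetic data. This toy example uses invented decaying-sine "samples" (not real piano recordings) one 1% amplitude step apart: the sample-by-sample difference between the two waves fits in far fewer bits than the waves themselves, though at this particular level difference it needs more than the 4-5 bits suggested above:

```python
import math

SR = 44100

def note(amp, n=1024, freq=261.63):
    """Synthetic 'middle C' hit: a decaying sine, as 16-bit-style integers."""
    return [round(amp * math.exp(-3 * i / n) * math.sin(2 * math.pi * freq * i / SR))
            for i in range(n)]

v50 = note(16000)                      # hypothetical velocity-50 layer
v51 = note(16160)                      # velocity 51: 1% louder, same mechanics

def bits_needed(xs):
    """Bits for a signed integer covering the largest magnitude in xs."""
    return max(abs(x) for x in xs).bit_length() + 1

delta = [a - b for a, b in zip(v51, v50)]
print(bits_needed(v50), bits_needed(delta))   # -> 15 9
```

Storing one layer in full and its neighbours as deltas is the kind of redundancy a compressor tuned to this data could exploit.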

  • @mdovey said:

    Here, I think we may need to agree to differ. Given the mechanical nature of a piano, and given all other things (environment, mic positions, touch etc.) being constant, I would suspect that the sample for middle C at velocity 50 is pretty similar to that at velocity 51. Similar enough, I suspect, that you could overlay both waves such that for each time sample the difference in amplitude can be expressed in 4-5 bits. I suspect that the accuracy of the CEUS engine in carefully controlling the playback greatly helps here.

    If you interpolate data between samples, it is no longer a true representation of the original sample. Also, if you were to make two consecutive recordings of a piano string being hit by a hammer at the same MIDI velocity and then tried to phase-cancel them 100%, it just would not happen, as a piano is an analogue instrument, not a digital waveform.

    It would be disappointing to learn that, after all the effort of high-level sampling, what is provided as an end result is a digitally compressed "modeling" of a piano note rather than the actual recording reproduced faithfully using all the original data (i.e. lossless).

    Of course, only VSL are in a position to compare the original with the compressed version to understand the compromises involved. But I'm sure most composers or music mixers would question having their final mixes subjected to 10:1 compression outside of the squashing that occurs for iTunes/MP3s.

    Julian


  • you really need to distinguish between lossy (AAC, MP3, etc.) and lossless compression (RAR, ZIP, etc.) ...

    usually rar is much better than zip ... though i can show you examples of files which compress to 1% of the original size and others which you can't get below 99% ... it always depends on what is in the file and which algorithm is used for compression.

     

    actually PCM is already some sort of (lossless) compression .... christian


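christian's point - that lossless ratios depend entirely on what is in the file - is easy to demonstrate with any general-purpose lossless compressor. Here zlib stands in for zip/rar, and the two inputs are invented extremes:

```python
import os
import zlib

repetitive = b"middle C " * 20000        # highly redundant input, 180000 bytes
random_ish = os.urandom(180000)          # nothing for the algorithm to match

r_rep = len(zlib.compress(repetitive, 9)) / len(repetitive)
r_rand = len(zlib.compress(random_ish, 9)) / len(random_ish)
print(f"repetitive: {r_rep:.1%}, random: {r_rand:.1%} of original size")
```

The repetitive input shrinks to a fraction of a percent, while the random input does not shrink at all - the same algorithm, wildly different results, all perfectly lossless.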
  • Hi Christian,

    I've never known an audio file zip to save more than about 10% (90% of the original size), except when there has been silence in the file, whereas lossy compression can be extremely effective but with the downside of compromising the original quality.

    It is because of exactly this that I am most interested in the rationale behind VSL (purveyors of quality!) choosing a 10:1 lossy algorithm... or has someone discovered the holy grail!

    Julian


  • the secret, besides the algorithm itself, is how well the dictionary used for matching (see my post above) fits the particular data.

    it should work with similar efficiency on harps and acoustic guitar - it will never work with brass.

    dig out your old EXS install DVDs and see how well some instruments compressed whereas others did not ...

    christian


  • @julian said:

    I've never known an audio file zip to save more than about 10% (90% of the original size), except when there has been silence in the file, whereas lossy compression can be extremely effective but with the downside of compromising the original quality.

    Suppose you had 100 different audio files, each one being a different pianist playing the same piano piece. Each audio file will be unique, because no two pianists would play the same piece in precisely the same way, and each individual audio file would probably compress to a zip of about 90% of the original size.

    However, suppose you took all 100 audio files and put them in a single zip, or joined all the audio files into one large recording of all 100 performances and zipped that. My suspicion is that, because of the similarities between the files, you would achieve better compression (maybe worth an empirical experiment).

    Matthew
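Matthew's "empirical experiment" is easy to approximate with zlib, with one caveat: deflate only exploits exact byte repeats within a 32 KB window, so identical copies (used here as an idealisation) are the best case. Real performances differing at the sample level would not shrink this way - which is essentially Julian's objection below:

```python
import os
import zlib

take = os.urandom(8000)                  # stand-in for one recorded 'performance'
takes = [take] * 4                       # four byte-identical performances

# Compress each file on its own, then all of them joined into one archive.
separately = sum(len(zlib.compress(t, 9)) for t in takes)
together = len(zlib.compress(b"".join(takes), 9))
print(separately, together)
```

Compressed separately, the four incompressible files cost four full copies; joined, copies two to four collapse into back-references to the first, so the archive is barely larger than a single copy.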

  • @julian said:

    If you interpolate data between samples, it is no longer a true representation of the original sample. Also, if you were to make two consecutive recordings of a piano string being hit by a hammer at the same MIDI velocity and then tried to phase-cancel them 100%, it just would not happen, as a piano is an analogue instrument, not a digital waveform.
    I wasn't suggesting interpolating data.

    However, taking your cancellation example: I am willing to believe that the CEUS is sophisticated and accurate enough that if you programmed it to play the same note in the same way in the same acoustic environment and then tried to phase-cancel the recordings, you could achieve in the region of 90% cancellation.

    I am also willing to believe that if you got the CEUS to play the same note in the same way but just one velocity layer apart (given that we are talking about 100 velocity layers overall), and then attempted to phase-cancel those waves, you could achieve close to 90% cancellation.

    And it is basically those sorts of properties that lend this sample set to a high compression ratio, regardless of the compression algorithm used (although I'm happy to believe that VSL's algorithm is optimised for this sort of data).

    A good benchmark, if VSL were willing to try it, would be for someone at VSL to take the full 500 GB of uncompressed data and put it in one big zip file, and then let us know what compression that achieves. It may not be as good as the 10:1 ratio (and a bit embarrassing for VSL if it turns out better!!), but my suspicion is that it will not be too far off (8:1 - 9:1).

    Matthew

  • @mdovey said:

    @julian said:

    I've never known an audio file zip to save more than about 10% (90% of the original size), except when there has been silence in the file, whereas lossy compression can be extremely effective but with the downside of compromising the original quality.

    Computers do not have a subjective button. There would be absolutely no file-size saving from zipping 100 performances of the same piece by different pianists compared with 100 totally different pieces. CM will, I'm sure, confirm this!

    Here's an analogy: you want to build a house from handmade bricks. It takes 10,000 bricks to build the house, but you don't want to transport 10,000 bricks, so you transport 1,000 bricks and make 9,000 machine-reconstructed bricks based on the 1,000 originals. The house built from the machine bricks may look the same (to 99% of viewers) as the house built entirely from handmade bricks, BUT for the purist the machine-reconstructed brick house is not made of original handmade materials.

    Now what's the Vienna Imperial compared with the original instrument?!

    Julian



  • Good Lord, this thread is nuts.  Very much looking forward to more demos.


  • @mdovey said:

    @julian said:

    If you interpolate data between samples, it is no longer a true representation of the original sample. Also, if you were to make two consecutive recordings of a piano string being hit by a hammer at the same MIDI velocity and then tried to phase-cancel them 100%, it just would not happen, as a piano is an analogue instrument, not a digital waveform.
    I wasn't suggesting interpolating data.

    You will not get cancellation, as there are too many external variables (the reflections of the room, the felt on the hammers, the ribbon in the microphone - even, if you get down to minute levels, the temperature of the string when it is struck a second time). None of these conform to MIDI 1-127 level differences; they have almost infinite variations in reaction.

    So you will not get cancellation, and therefore the recordings are not the same - at bit level it's either the same or different; there is no such thing as 90% shared waveforms. If you want to interpolate, then yes, algorithms will achieve this to a lesser or greater extent, but the result is not the same recording - it is a reconstruction.

    Julian


  • just one thing to say:

    playing acoustic music through loudspeakers always means reconstruction: sound waves become electrical current in the mic, are perhaps digitised and converted back to analogue, and the sound waves are then reconstructed by moving the loudspeaker membrane.

    that´s the nature of it. you never listen to the original, but to an electro-acoustic (and perhaps digital) reconstruction of the original.

    if that reconstruction sounds good, it´s good. if vsl data compression sounds good, it´s good.

    but here is the solution: if the Vienna Imperial does not sound "original" enough, we just have to buy a Boesendorfer CEUS Imperial.

    i tested it, it´s really really marvellous. 130k and you´re done.