@Sakamoto said:
will my 2.6 GHz Quad Mac from 2006 be strong enough?
this is a G5 - right? sorry, intel only. christian
and remember: only a CRAY can run an endless loop in just three seconds.
Hello, no, it's an Intel 2.6 with 4 processors and 6 GB of RAM. Thanks for the fast answer - I was already worried that I might have upset you by mentioning EastWest. But I didn't want to be impolite - it's just an honest question. Do you think the Mac is fast enough, and is the Imperial tougher than the brilliant but somehow smooth Bösendorfer Imperial?
Any more demos coming soon? I'm curious about one thing: the demo Christian played sounds less mid-rangy than the Etude. I prefer the more open sound of the Christian demo. Can you tell me what the reason for the different sound would be? Was it rendered from a different perspective than the "close-mic" perspective? Or did he EQ it a bit? I would like to hear more demos that sound like that demo from Christian - that is the sound I want to hear. The only problem with Christian's demo is that it's either VERY loud or VERY soft, with not much in between.
no problem - we ran it, even the pre-release version, on a 2.5 GHz core2duo with 4 GB RAM (single 2.5" 160 GB disk) ...
the largest preset will take ~1.5 GB RAM, just place the content on a less stressed disk to allow good streaming.
CPU is not a problem at all (with or without sympathetic resonance)
christian
ps: someone asked if the Imperial can be run on a 2 GB machine ... not easily ... a very downsized XP32 machine with 2 GB RAM did load the default preset immediately after boot; otherwise not enough free memory is available
@cm said:
ps: someone asked if the Imperial can be run on a 2 GB machine ... not easily ... a very downsized XP32 machine with 2 GB RAM did load the default preset immediately after boot; otherwise not enough free memory is available
This sounds like you are locking down the pages in memory, which is a good thing. This means once it is loaded, it will never get paged out. So I can assume that if it loads it will run.
May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?
@arne said:
May I ask again at which latency you are able to run it on the recommended machine without clicks/pops?
On modern computers you could set latency to 1.2 ms (32 samples).
Of course you have to turn off the convolution reverb at such low latency settings.
best
Herb
this is a general principle of sample-player engines - since only the sample headers are loaded into memory, the allocated space is locked - at least that's what the engine tells the os 😉 this memory space is then used for buffering data (as soon as you start to play a certain sample, this buffer gets filled up from the hard disk with consecutive data)
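To make that streaming principle a bit more concrete, here is a minimal sketch in Python - purely illustrative, with made-up names and buffer sizes, not the actual Vienna engine: only the head of each sample stays resident in RAM (a real engine would additionally ask the OS to lock those pages), and the rest is read from disk once the note starts.

```python
# Illustrative sketch only (not VSL's engine): keep a small resident "preload"
# buffer per sample and stream the remainder from disk when playback starts.
import threading

PRELOAD_BYTES = 32 * 1024  # hypothetical per-sample preload size

class StreamedSample:
    def __init__(self, path):
        self.path = path
        with open(path, "rb") as f:
            # Resident head of the file; a real engine would also lock these
            # pages in RAM (mlock / VirtualLock) so they never get paged out.
            self.preload = f.read(PRELOAD_BYTES)

    def start(self, sink):
        # Playback can begin immediately from the resident buffer ...
        sink.write(self.preload)
        # ... while the rest of the sample is fetched from disk in the background.
        threading.Thread(target=self._stream_rest, args=(sink,), daemon=True).start()

    def _stream_rest(self, sink, chunk=64 * 1024):
        with open(self.path, "rb") as f:
            f.seek(len(self.preload))
            while True:
                data = f.read(chunk)
                if not data:
                    break
                sink.write(data)
```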
to which recommended machine do you refer now? here are the system requirements
IIRC on the 2.5 GHz core2duo the Imperial has been played by default at 128 samples latency ... depending on system, audio device driver quality and settings you could go down to 64 (on rare occasions even 32) or need to increase
hth, christian
@cm said:
to which recommended machine do you refer now? here are the system requirements
IIRC on the 2.5 GHz core2duo the Imperial has been played by default at 128 samples latency ... depending on system, audio device driver quality and settings you could go down to 64 (on rare occasions even 32) or need to increase
These requirements are quite modest, so I might have referred to them 😉
64 samples is acceptable; 128 is a bit too much for my taste - it starts to feel sluggish. It is good to know that you have seen setups running at 32 samples, even if that is not the common case.
AFAIK Ivory works with a 384 KB buffer per sample, whereas VI/VE works with 64 KB ... doing the quick math it appears the Imperial actually uses only 32 KB (i'll have to ask for the details) ...
i have been told the difference between 64 and 32 samples can more or less not be recognized (honestly: who is able to hear a 0.7 ms difference ...), whereas CPU load increases dramatically and the hard disk must be really good.
christian
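For anyone converting between the buffer sizes and the milliseconds quoted in this thread, the relationship is simply buffer length divided by sample rate. A quick sketch, assuming a 44.1 kHz session (the sample rate isn't stated above):

```python
# Buffer size to latency, assuming 44.1 kHz (at 48 or 96 kHz the values shrink).
def buffer_latency_ms(samples, sample_rate=44100):
    return 1000.0 * samples / sample_rate

for buf in (32, 64, 128):
    print(f"{buf:>3} samples -> {buffer_latency_ms(buf):.2f} ms")
# 32 -> 0.73 ms, 64 -> 1.45 ms, 128 -> 2.90 ms: dropping from 64 to 32 samples
# only buys roughly 0.7 ms, which is the difference christian refers to above.
```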
@julian said:
However, when VSL sample a piano, for example, every sample is unique. It may have the same pitch and be slightly louder than the sample before, but there is no redundancy between samples of the same pitch.
@mdovey said:
Here, I think we may need to agree to differ. Given the mechanical nature of a piano, and given all other things (environment, mic positions, touch etc.) being constant, I would suspect that the sample for middle C at velocity 50 is pretty similar to that at velocity 51. Similar enough, I suspect, that you could overlay both waves such that for each time sample the difference in amplitude can be expressed in 4-5 bits. I suspect that the accuracy of the CEUS engine in carefully controlling the playback helps greatly here.
If you interpolate data between samples it no longer is a true representation of the original sample. Also if you were to make two consecutive recordings of a piano string being hit by a hammer using the same midi velocity then tried to phase cancel these 100% it just would not happen as a piano is an analogue instrument not a digital waveform.
It would be disappointing to learn that, after all the effort of high-level sampling, what is provided as an end result is a digitally compressed "modeling" of a piano note rather than the actual recording reproduced faithfully using all the original data (i.e. lossless).
Of course only VSL are in a position to compare the original with the compressed version and understand the compromises involved. But I'm sure most composers or music mixers would question having their final mixes subjected to 10:1 compression, beyond the squashing that occurs for iTunes/mp3s.
Julian
you really need to distinguish between lossy (AAC, mp3, etc.) and lossless compression (rar, zip, etc.) ...
usually rar is much better than zip ... though i can show you examples of files which compress to 1% of the original size and others which you can't get below 99% ... it always depends on what is in the file and which algorithm is used for compression.
actually PCM is already some sort of (lossless) compression ... christian
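As a purely synthetic illustration of mdovey's earlier point (sine-wave data standing in for velocity 50 and 51 - not real piano samples, and not VSL's actual codec): storing one waveform plus the sample-wise difference is still bit-exact lossless, yet the difference needs far fewer bits than the raw signal.

```python
# Synthetic illustration only: the lossless "store one wave + the residual" idea.
import math

N, RATE, FREQ = 1000, 44100, 440.0
vel50 = [int(20000 * math.sin(2 * math.pi * FREQ * n / RATE)) for n in range(N)]
vel51 = [int(20400 * math.sin(2 * math.pi * FREQ * n / RATE)) for n in range(N)]  # slightly louder take

residual = [b - a for a, b in zip(vel50, vel51)]      # small numbers
restored = [a + r for a, r in zip(vel50, residual)]   # exact, lossless reconstruction
assert restored == vel51

bits_raw = max(abs(x) for x in vel51).bit_length() + 1       # bits/sample stored raw
bits_res = max(abs(x) for x in residual).bit_length() + 1    # bits/sample stored as residual
print(bits_raw, "bits per sample raw vs", bits_res, "bits for the residual")
```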
Hi Christian,
I've never known an audio file zip save more than about 10% (90% of the original size) except when there has been silence in the file, whereas lossy compression can be extremely effective but with the downside of compromising the original quality.
It is because of exactly this that I am most interested in the rationale behind VSL (purveyors of quality!) choosing a 10:1 lossy algorithm ... or has someone discovered a holy grail!
Julian
the secret, besides the algorithm itself, is the dictionary used to match the particular data (see my post above).
it should work with similar efficiency on harps and acoustic guitar - it will never work with brass.
dig out your old EXS install DVDs and see how well some instruments compress whereas others do not ...
christian
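A crude way to see this content-dependence, using generic DEFLATE (which does LZ77/dictionary-style matching) rather than VSL's own scheme: highly repetitive, tonal-like data shrinks enormously, while noise-like data barely shrinks at all.

```python
# Illustration with generic zlib (not VSL's algorithm): compression ratio
# depends entirely on how much redundancy the data contains.
import math, os, zlib

N = 200_000
cycle = bytes(int(127 + 100 * math.sin(2 * math.pi * k / 126)) for k in range(126))
tonal = (cycle * (N // len(cycle)))[:N]   # exactly repeating, very redundant
noisy = os.urandom(N)                     # random bytes, no redundancy at all

for name, data in (("tonal", tonal), ("noisy", noisy)):
    packed = zlib.compress(data, 9)
    print(f"{name}: compressed to {len(packed) / len(data):.1%} of original size")
# The repetitive buffer drops to a few percent; the random one stays near 100%.
```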
@julian said:
I've never known an audio file zip save more than about 10% (90% of the original size) except when there has been silence in the file, whereas lossy compression can be extremely effective but with the downside of compromising the original quality.
I wasn't suggesting interpolating data.
@julian said:
If you interpolate data between samples it no longer is a true representation of the original sample. Also if you were to make two consecutive recordings of a piano string being hit by a hammer using the same midi velocity then tried to phase cancel these 100% it just would not happen as a piano is an analogue instrument not a digital waveform.
@mdovey said:
@julian said:
I've never known an audio file zip save more than about 10% (90% of the original size) except when there has been silence in the file, whereas lossy compression can be extremely effective but with the downside of compromising the original quality.
Computers do not have a "subjective" button. There would be absolutely no file-size saving from zipping 100 performances of the same piece by different pianists compared with 100 totally different pieces. CM will, I'm sure, confirm this!
Here's an analogy: you want to build a house from handmade bricks. It takes 10,000 bricks to build the house, but you don't want to transport 10,000 bricks, so you transport 1,000 bricks and make 9,000 machine-reconstructed bricks based on the 1,000 originals. The house created from the machine bricks may look the same (to 99% of viewers) as the house built entirely from handmade bricks, BUT for the purist the machine-reconstructed house is not built from original handmade materials.
Now what's the Vienna Imperial compared with the original instrument?!
Julian
@mdovey said:
I wasn't suggesting interpolating data.
@julian said:
If you interpolate data between samples it no longer is a true representation of the original sample. Also if you were to make two consecutive recordings of a piano string being hit by a hammer using the same midi velocity then tried to phase cancel these 100% it just would not happen as a piano is an analogue instrument not a digital waveform.
You will not get cancellation, as there are too many external variables (the reflections of the room, the felt on the hammers, the ribbons in the microphone, even, if you get down to minute levels, the temperature of the string when it is struck a second time). None of these conform to MIDI 1-127 level differences; they have almost infinite variations in reaction.
So you will not get cancellation, which means they are not the same - at bit level it's either the same or different; there is no such thing as 90% shared waveforms. If you want to interpolate, then yes, algorithms will achieve this to a greater or lesser extent, but it is not the same recording - it is a reconstruction.
Julian
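To illustrate the null-test argument with synthetic data (these are generated signals, not real recordings or anyone's actual measurement): two "takes" of the same note that differ only by tiny random variation never cancel to silence when one is inverted and summed, even though they would sound practically identical.

```python
# Toy null test on synthetic "takes" (illustration only, not real recordings).
import math, random

random.seed(0)
N, RATE, FREQ = 44100, 44100, 220.0

def take():
    # same note, plus tiny per-sample variation standing in for the analogue
    # differences Julian lists (room reflections, hammer felt, mic, string temperature)
    return [math.sin(2 * math.pi * FREQ * n / RATE) + random.gauss(0, 1e-3)
            for n in range(N)]

a, b = take(), take()
residual = [x - y for x, y in zip(a, b)]   # invert one take and sum ("phase cancel")
peak = max(abs(r) for r in residual)
print(f"peak residual: {peak:.4f} (0.0000 would mean perfect cancellation)")
```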