Thanks for the replies, Herb. I guess I am just disappointed that the player clips; that's all. [:(]
DG-
why do you think it _shouldn't_ ???
We live in a finite world where practically everything has a beginning and an end.
Have you ever made your morning coffee cup spill over because you did not stop pouring more and more in?
What will happen if 3 people start pouring coffee into your cup?
Your cup is a container, just like a 32-bit float memory chunk.
It can't spill over, though - it clips!
chrisT
-
The reason that I think it shouldn't is because other plugs don't, and my DAW doesn't, except with VI.
I'm just trying to find a workflow, when I program, that is as near to the real world as I can get. With real orchestration, for example, there is no way that two flutes playing are twice as loud as one flute, so by reducing the volume in the VI player I am having to do something unnatural. Then I have to raise it again when there is only one flute playing. I would rather do all these volume changes as part of the mixing process than while programming MIDI; otherwise it feels like I have to do the job twice.
It is no huge deal; I was just interested to know why the VI player is different, in this regard, from other plugs that I work with.
DG
-
@DG said:
The reason that I think it shouldn't is because other plugs don't, and my DAW doesn't, except with VI.
I can hardly believe this. I have been working with DAWs and integrated synths and samplers for 20 years now, and clipping was always an issue, as opposed to analog gear, where there was a certain headroom and overdriving it caused distortion. Which was sometimes appealing. Tubes could be driven into the red zone, as well as tape.
Why does clipping occur? If 16-bit integers are being used for encoding, you have a maximum value of 2^15 - 1 = 32767, which is 0111 1111 1111 1111 in binary digits. Since a wave alternates, the other half of the range goes to the negative numbers; this is two's complement encoding, where 1000 0000 0000 0000 is the minimum value (-32768) a 16-bit integer can hold. So what happens if the signal keeps "overshooting"? It bangs its head against that limit. And if a wave no longer behaves like a wave, i.e. going up and down, because the 16-bit bucket is full most of the time where it should rather pull back, the result is a straight line at the maximum value. This causes a very annoying distortion when the signal is brought back into the analog domain.
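To make that concrete, here is a minimal C sketch of summing two 16-bit samples and clamping the result (my own illustration, not code from any particular plug-in):

```c
/* A minimal sketch of what happens when two loud 16-bit samples are summed.
   The sum is computed in a wider type and then clamped ("clipped") to the
   16-bit range, producing exactly the flat-topped waveform described above. */
#include <stdint.h>
#include <stdio.h>

static int16_t add_and_clip(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;   /* wide enough to hold the true sum */
    if (sum > INT16_MAX) sum = INT16_MAX;    /*  32767 = 0111 1111 1111 1111 */
    if (sum < INT16_MIN) sum = INT16_MIN;    /* -32768 = 1000 0000 0000 0000 */
    return (int16_t)sum;
}

int main(void)
{
    /* Two samples near full scale: the true sum would be 60000,
       but the 16-bit "bucket" can only hold 32767. */
    printf("%d\n", add_and_clip(30000, 30000));  /* prints 32767 */
    return 0;
}
```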
Now to 32-bit float (the same applies to 64-bit doubles or 128-bit long doubles). Internally, digital synths operate within the boundaries -1.0 to +1.0. The benefit of float encoding is that errors accumulate more slowly, giving greater precision and better sound in the end. But the limits are still there: -1.0 and +1.0. Exceeding them for any considerable time results in clipping of the signal.
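The same idea with 32-bit floats, hard-clipping at the conventional -1.0/+1.0 limits (again just an illustrative sketch, not the VI player's actual code):

```c
/* A minimal sketch of hard-clipping a float signal at the conventional
   -1.0 / +1.0 limits before it is handed to the converter. */
#include <stdio.h>

static float hard_clip(float x)
{
    if (x >  1.0f) return  1.0f;
    if (x < -1.0f) return -1.0f;
    return x;
}

int main(void)
{
    float a = 0.8f, b = 0.7f;          /* two signals, each within range     */
    float mixed = a + b;               /* their sum is 1.5 -> out of range   */
    printf("%f\n", hard_clip(mixed));  /* prints 1.000000 - the clipped value */
    return 0;
}
```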
You've made an interesting remark that in "real life" adding another flute does not double the volume.
Where? When two flute players meet in the woods for a jam, you would hardly hear the second one joining in if they play in unison. But let them meet in your bathroom!
You see, the container is always an issue !
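For anyone who wants numbers behind "adding a flute doesn't double the volume", here is a rough sketch of the usual level arithmetic (my figures, not taken from the posts above): two equal but uncorrelated sources add about +3 dB, two identical digital signals add +6 dB, and a perceived doubling of loudness needs roughly +10 dB.

```c
/* Rough level arithmetic for doubling a source (illustrative figures only). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("uncorrelated doubling: %+.1f dB\n", 10.0 * log10(2.0)); /* about +3.0 dB */
    printf("correlated doubling:   %+.1f dB\n", 20.0 * log10(2.0)); /* about +6.0 dB */
    printf("perceived doubling:    about +10 dB\n");                /* psychoacoustic rule of thumb */
    return 0;
}
```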
All our samples are optimized. They come in full range. Technically there is no difference between a pp sample and a ff sample. They both use the full resolution of 24 bits.
They are scaled only internally.
I think you should take this into account: turn the level down in your DAW and turn your amplifiers up.
One thing is true, however: in the "real world" signals don't just add up the way we have to add them in the digital domain. That is due to the time constraints we are facing. I have devised a mixer which mixes signals more accurately and gives really amazing results, but it has to work in the frequency domain, and in order to catch enough of the bass portion the FFT window should be at least 4096 samples wide. But nobody can play with a delay of 100 ms!
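As a rough check of that delay figure (assuming a 44.1 kHz sample rate, which isn't stated above):

```c
/* How much time a 4096-sample FFT window represents at an assumed 44.1 kHz. */
#include <stdio.h>

int main(void)
{
    const double sample_rate = 44100.0;   /* assumed sample rate */
    const double fft_window  = 4096.0;    /* window size from the post */
    double latency_ms = fft_window / sample_rate * 1000.0;
    printf("%.1f ms\n", latency_ms);      /* about 92.9 ms -> roughly the 100 ms delay mentioned */
    return 0;
}
```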
Thanks
chrisT