Just curious, is it happening with a particular set of samples/patches, or is it generally across the board?
Mahlon
That's normal, especially if you play at high velocities.
The samples and the VI output give you as much volume as possible.
So if you record a monophonic solo instrument, for example, you won't lose quality.
If you are performing an instrument in polyphonic mode with more than one voice, you have to lower the output to avoid clipping.
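A rough sketch of the arithmetic in Python (illustrative only, not the player's actual code):

    # Illustrative sketch -- two voices that each already use the full output range.
    voice_a = 0.9          # peak amplitude of voice 1 (full scale is +/-1.0)
    voice_b = 0.8          # peak amplitude of voice 2

    mixed = voice_a + voice_b          # 1.7 -- past full scale, this will clip
    safe = 0.5 * (voice_a + voice_b)   # 0.85 -- lowering the output avoids it
    print(mixed, safe)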
best
Herb
However, that doesn't explain why it clips, normal or not. With 32-bit float there should be no clipping. I suspect that somewhere along the line there is fixed-point processing of some sort going on, so my question would be why, and whether this behaviour is likely to change with future updates?
FWIW, the worst offender is the Harp, which is almost unusable unless the volume is dramatically reduced.
DG
Which 32-bit float resolution do you mean?
The player's output format is 24-bit.
The harp is a "superpolyphonic" instrument. Each forte sample inside the player has the same volume as any epic horn ff sample. So it's quite natural that you lower the volume of the harp drastically to get the right balance with the other orchestral instruments.
On the other hand, you could get a hot signal if you record a solo passage performed piano on the harp.
All this volume balancing is normally done in your sequencer's orchestra templates.
Within a large orchestra setup including powerful percussion, a solo violin, for example, will be reduced drastically, maybe by 20 dB or more, to get the right balance between the solo violin and the brass section.
In a string quartet setup you will reduce the violin by 3 to 6 dB.
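For reference, those dB figures translate into linear gain factors like this (a small illustrative helper, not part of any VSL tool):

    # Converting the dB cuts above into linear gain factors.
    def db_to_gain(db):
        return 10 ** (db / 20.0)

    print(db_to_gain(-20))   # ~0.10 -- a 20 dB cut leaves 10% of the amplitude
    print(db_to_gain(-6))    # ~0.50 -- a 6 dB cut roughly halves it
    print(db_to_gain(-3))    # ~0.71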
best
Herb
Why do you think it _shouldn't_?
We live in a finite world where practically everything has a beginning and an ending.
Did you ever make your morning coffee cup spill over because you did not stop pouring in more and more?
What will happen if 3 people start pouring coffee into your cup?
Your cup is a container, like a 32-bit float memory chunk.
It can't spill over, though - it clips!
chrisT
The reason I think it shouldn't is that other plugs don't, and my DAW doesn't, except with VI.
I'm just trying to find a workflow that is as near to the real world as I can when I program. With real orchestration, for example, there is no way that two flutes playing are twice as loud as one flute, so by reducing the volume in the VI player I am having to do something unnatural. Then I have to raise it again when there is only one flute playing. I would rather make all these volume changes part of the mixing process than part of the MIDI programming; otherwise it feels like I have to do the job twice.
It's no huge deal; I was just interested to know why the VI player differs, in this regard, from the other plugs I work with.
DG
@DG said:
The reason that I think it shouldn't is because other plugs don't, and my DAW doesn't, except with VI.
I can hardly believe this. I have been working with DAWs and integrated synths and samplers for 20 years now, and clipping was always an issue, as opposed to analog gear, where a certain headroom caused distortion instead - which was sometimes appealing. Tubes could be driven into the red zone, as could tape.
Why does clipping occur? If 16-bit integers are being used for encoding, you have a maximum value of 2^15 - 1 = 32767, which is 0111 1111 1111 1111 in binary digits. Since waves alternate, the other half of the range goes to the negative numbers; this is 2's complement encoding. So 32767 is the largest value a 16-bit integer can hold. What happens if the signal keeps "overshooting"? It bangs its head at the limit. And if a wave does not behave like a wave, i.e. going up and down, the 16-bit bucket is full most of the time when it should rather pull back, resulting in a straight line at the maximum value. This causes a very annoying distortion when the signal is brought back to the analog domain.
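A small sketch of that head-banging in Python (illustrative, not any DAW's actual code):

    # Hard clipping at the 16-bit limit.
    MAX16 = 2**15 - 1   # 32767 = 0111 1111 1111 1111, the 16-bit maximum
    MIN16 = -2**15      # -32768

    def clip16(sample):
        # Anything past the limit is pinned to the limit, which produces
        # the flat-topped waveform described above.
        return max(MIN16, min(MAX16, sample))

    print(clip16(30000 + 30000))   # 32767 -- the signal bangs its head at the limit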
Now for 32-bit float (the same applies to 64-bit doubles or 128-bit long doubles).
Internally, digital synths operate within the boundaries -1.0 to +1.0. The benefit of float encoding is that errors accumulate more slowly, so you get greater precision and better sound in the end. But the limits are still there: -1.0 and +1.0. Exceed them for any length of time and the signal clips - if not inside the float math itself, then at the latest when it is converted back to fixed point at the output.
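A sketch of where the float signal finally clips, assuming the output stage converts to 16-bit fixed point (illustrative only):

    # Float values beyond +/-1.0 survive inside the processing chain,
    # but not the conversion back to fixed point -- so they clamp here.
    def float_to_int16(sample):
        clamped = max(-1.0, min(1.0, sample))
        return int(clamped * 32767)

    print(float_to_int16(0.5))   # 16383 -- within range
    print(float_to_int16(1.7))   # 32767 -- clipped at full scale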
You've made an interesting remark that in "real life" adding another flute does not double the volume.
It depends on where. When two flute players meet in the woods for a jam, you would hardly hear the second joining in if they play in unison. But let them meet in your bathroom!
You see, the container is always an issue!
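To put a number on it: assuming the two flutes are uncorrelated sources of equal level, they add about 3 dB, while "twice as loud" perceptually needs roughly 10 dB - a quick check:

    import math

    # Uncorrelated sources of equal level add in power, not amplitude.
    def combined_level_db(n_sources, level_db=0.0):
        power = n_sources * 10 ** (level_db / 10.0)
        return 10 * math.log10(power)

    print(combined_level_db(2))   # ~3.0 dB -- a second flute adds 3 dB,
                                  # while "twice as loud" needs roughly 10 dB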
All our samples are optimized. They come at full range. Technically there is no difference between a pp sample and an ff sample: they both use the full resolution of 24 bits.
They are scaled only internally.
I think you should take this into account and level down your DAW and level up your amplifiers.
One thing is true, however: in the "real world" signals don't just add up the way we have to add them in the digital domain.
This is due to the time constraints we face. I have devised a mixer which mixes signals more accurately and gives really amazing results, but it has to work in the frequency domain, and to catch enough of the bass portion the FFT window should be at least 4096 samples wide. But nobody can play with a delay of 100 ms!
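That delay figure follows directly from the window size, assuming a 44.1 kHz sample rate:

    # Latency of a 4096-sample FFT window at 44.1 kHz.
    window = 4096
    sample_rate = 44100

    latency_ms = 1000.0 * window / sample_rate
    print(latency_ms)   # ~92.9 ms -- close to the 100 ms figure above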
Thanks
chrisT