Vienna Symphonic Library Forum

  • ... which, of course, is just one of the possible ways of working.

    To Calaf: When recording an orchestra in a hall for a CD production, you will find only very few engineers going for nothing but the ambient sound. Most of the time you will hear them using a main microphone system (... which would be the "IR-only" part in our virtual orchestra), adding several close mics for single instruments and/or groups, mixed in only a few dB lower and often delayed in relation to the main system.

    Taking into account that the instruments of our Library are anything but "dry" (in the sense of anechoic), you will find yourself adding the "wet" return of the reverb a little bit lower in volume than the sum of the direct signals.
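
    To make that balance concrete, here is a rough offline sketch in Python (NumPy/SciPy). The wet level and the small delay on the dry part are made-up starting values, not a recommended setting, and the "main system" is simply the dry sum sent through the hall IR:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    SR = 44100  # sample rate in Hz (tracks and IR assumed mono at this rate)

    def db(gain_db):
        """dB value -> linear gain factor."""
        return 10.0 ** (gain_db / 20.0)

    def delay_ms(sig, ms):
        """Delay a signal by padding zeros at its start."""
        return np.concatenate([np.zeros(int(SR * ms / 1000.0)), sig])

    def virtual_orchestra_mix(dry_tracks, hall_ir, wet_db=-3.0, dry_delay=15.0):
        """Sum of the dry tracks = the "close mics"; the same sum through the hall
        IR = the "main system". The wet return sits a few dB below the dry sum and
        the dry part is delayed a little against it. All figures are starting
        points only - adjust by ear."""
        dry_sum = np.sum(dry_tracks, axis=0)               # tracks assumed equal length
        wet = fftconvolve(dry_sum, hall_ir) * db(wet_db)   # "main microphone system"
        dry = delay_ms(dry_sum, dry_delay)                 # "close mics", slightly delayed
        out = np.zeros(max(len(wet), len(dry)))
        out[:len(wet)] += wet
        out[:len(dry)] += dry
        return out
    ```

    In a DAW this is nothing more than the dry channels plus an IR-reverb return a few dB down, with a small delay offset between the two.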

    HTH,

    /Dietz - Vienna Symphonic Library

  • last edited

    @evanevans said:

    And if you don't like how it sounds, then you need to try another "Sampled Space" if doing some simple EQ isn't cutting it for you. Adding any DRY signal to an IR processed signal will only diminish the realism.


    I have heard a lot of demos that created "depth" quite effectively just with different wet/dry ratios. In fact it can be very difficult to build a piece with the common impulses (I have only tested free ones so far, but I doubt the others available vary greatly) at 100% wet without drowning the piece in hall mud. And IMO it does not diminish anything if, e.g., the far mics from SAM are given another pass with a tad more hall sound (meaning more dry and a lot less wet).

    So my question is: how do I create different depths with the one impulse I have from, e.g., the Concertgebouw, without diminishing the quality, in your opinion, Evan? And how do I mix the far or close mics of SAM (insert the library of your choice) with VSL, using the impulses I like?

    PolarBear

  • I think Evan is probably right from a theoretical point of view, but his advice to use impulses 100% wet is nonsense in most practical situations.

    I have never been able to use any impulse purely 100% wet (I also have and use the excellent impulses from Ernest Cholakis).

    Evan, I think you are again using the "arguments from a real pro" approach - and that, again, is not in line with how things work in practice.

    Combining dry signals with more ambient signals is not wrong at all; it is done in most recordings where ambient and close mics are used. Maybe some additional delay settings are required to merge the sounds well.

    My 2 cts.

    Peter
    www.PeterRoos.com

  • Or maybe we should record "close-mic" impulse responses. A lot of impulse responses are taken from a point very far from the source; I think that for each position we should record one impulse response close to that position and one very far from it. Then we could simulate what they do in the studio (mixing close mics and ambient mics) instead of using one impulse and playing with the dry/wet ratio.

  • last edited

    @PolarBear said:


    So my question is: how do I create different depths with the one impulse I have from, e.g., the Concertgebouw, without diminishing the quality, in your opinion, Evan? And how do I mix the far or close mics of SAM (insert the library of your choice) with VSL, using the impulses I like?
    PolarBear

    Hi PolarBear
    Reverb Adjustments or
    How to position instruments in a room with SIR or other reverbs?

    A possible way is to consider some physical facts:

    1) The farther away an instrument is in the room, the more indirect sound you receive as a listener.
    You control this parameter with the WET/DRY balance.

    2) The farther away an instrument is in the room, the more the indirect signal is delayed.
    You control this parameter with the DELAY TIME.
    Theory of sound and delay: the speed of sound is roughly 340 m/s, so every metre of distance adds about 3 ms of delay (5 m ≈ 15 ms, 10 m ≈ 30 ms, 15 m ≈ 45 ms). The delay between the direct sound and the delayed reflections gives us the room impression. Try different delays for the expected result (realistic delays would be between 10 and 50 ms).

    3) The farther away an instrument is in the room, the more the high and low frequencies are cut.
    How much depends on the room's physics. Carpets? Damped walls? Audience?
    An instrument at a distance of 15 m never sounds with the brilliance of an instrument played right beside you, so for instruments deep in the room you should always cut the high and low frequencies to some degree.
    This parameter, too, you can often change within the reverb.

    4) The farther away an instrument is in the room, the more "mono" it sounds (as far as the direct sound is concerned).
    Left and right positions you adjust with the pan parameter of your host program.
    Most of the time you need to make the signal mono or nearly mono. You could do this, for example, with a VST stereo-spread effect.
    Lots of common host programs have this possibility built in.
    The reason is that in reality, too, the direct signal of an instrument played far away is nearly mono, while the reflections (the wet signal) are in stereo... 5.1... 7.1 or whatever.

    How to adjust all these main facts? >>>> With your ears! (A rough numeric sketch of points 1) to 4) follows below.)
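
    For readers who like a numeric starting point before reaching for their ears, here is a minimal sketch of the four points above. The curve shapes and constants are invented for illustration only, not measured or recommended values:

    ```python
    SPEED_OF_SOUND = 340.0  # m/s, roughly

    def placement_parameters(distance_m, max_distance_m=20.0):
        """Rough, made-up starting values for points 1) to 4) above,
        all of them to be fine-tuned by ear."""
        x = min(distance_m / max_distance_m, 1.0)  # 0 = right beside you, 1 = back of the hall

        wet_ratio = 0.3 + 0.6 * x                         # 1) more indirect sound with distance
        travel_ms = distance_m / SPEED_OF_SOUND * 1000.0  # 2) ~3 ms per metre of distance
                                                          #    (see the later posts in this thread
                                                          #     for how to map this onto a
                                                          #     sensible wet-signal predelay)
        highcut_hz = 16000.0 - 8000.0 * x                 # 3) distant instruments lose brilliance ...
        lowcut_hz = 40.0 + 60.0 * x                       #    ... and a little low end
        stereo_width = 1.0 - 0.8 * x                      # 4) the direct part collapses towards mono

        return dict(wet_ratio=wet_ratio, travel_ms=travel_ms,
                    highcut_hz=highcut_hz, lowcut_hz=lowcut_hz,
                    stereo_width=stereo_width)

    # Example: a horn section placed about 15 m into the room
    print(placement_parameters(15.0))
    ```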


    all the best
    Beat Kaufmann

    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  • last edited
    My information was only that, nothing more. Rules are meant to be broken. Better to have the knowledge and know how to break the rules than to not know you've broken them.

    Happy mixing!

    Evan Evans

    P.S. Dry/wet delay time, otherwise known as PREDELAY, helps somewhat; however, this is only done artificially with IR-based FX. The best way is to sample the impulse from different locations on the stage into the same "receiving" microphone(s) at their fixed location (to emulate your ears). For some reason AUDIOEASE ALTIVERB™ has failed to realize this very simple concept and instead has been rampantly sampling the impulse response from one fixed stage position recorded into different microphone positions in the audience area. This is a backwards approach to what we naturally hear with only two ears: our two ears remain in a fixed position; it's the instruments' emanation positions which change, not the recording location.

    WOW THAT'S A LONG FOOTNOTE.

  • Just a side note on "the Concertgebouw impulses" mentioned in this thread: this is not the famous Amsterdam Concertgebouw, but a new building in Brugge, Belgium (before everyone believes that NoiseVault presents impulses from a top-3 concert hall for free...).

    Cheers,

    Peter

  • Beat--

    Thanks for your mini-tutorial. This thread is quite valuable.

    --Jay

  • Wow Beat, that seems to sum it up quite well!

    I do not disagree, Evan, that using impulses 100% wet is nonsense, but as you elaborated yourself, the currently available impulses don't really let you choose to do this. You also quite elegantly overlooked the question about applying the same impulse to different material.

    PolarBear

    Short footnote: How come rules are secrets? [:)]

  • last edited

    @Beat Kaufmann said:



    2) The farther away an instrument is in the room, the more the indirect signal is delayed.
    You control this parameter with the DELAY TIME.
    Help: the speed of sound is roughly 340 m/s, so every metre of distance adds about 3 ms of delay (5 m ≈ 15 ms, 10 m ≈ 30 ms, 15 m ≈ 45 ms). For example, if you want to place the horn section at a distance of 20 m, your delay time for the wet signal should be around 20 x 3 ms = 60 ms.


    Hello Beat and others,

    I am sorry to say, but I believe your argument is not correct here, and the advice should actually be the opposite of what you conclude.

    You say that with sound sources farther away the indirect signal is delayed MORE. For the horn section you propose to delay the indirect sound by some 60 msec. However, you forget that not only should the first reflections arrive at the listener after 60 msec, but ALSO the direct sound!

    In fact, the farther away the sound sources are (brass, percussion), the LESS delay difference there should be between direct and indirect sound, not MORE. If this is not clear, try to draw a concert stage with mics 10 m from the stage. Now draw lines directly from the strings to the mics, draw in some first-order reflections, and estimate the time differences between direct and indirect sound. You will conclude that with close-up sources, the direct sound hits the mics quite a bit earlier than the first reflections. The distance from the strings to the mics is probably about the same as from the strings to the walls (10-15 m), so after the direct sound hits the mics, the first reflections still have to travel some 14-20 m.

    Now do the same drawing for sound sources at the back (say, the snare drum). You will have to conclude that there is hardly any difference in the arrival of direct and indirect sound at the mics. The distant sound sources are relatively close to the walls, so there is hardly any time difference between the direct sound and the first-order reflections. This is all simple maths with angles, triangles, etc.

    Now of course we do not want to delay the direct sounds as well, although this might be better from a realism point of view. Still, we should try to maintain the relative delay between direct sound and first reflections. So in the example of the French horns, either predelay the direct sound by, say, 50 msec and the reflections by 65 msec, or just predelay the indirect sound by about 15 msec.

    So, for farther-away sources you should use no or only a very small predelay between dry and wet signals, and for close-up sources you can use longer predelays.
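
    Peter's drawing exercise can also be reproduced numerically. The sketch below models a single side-wall reflection with the image-source trick; the stage layout and hall dimensions are entirely made up, just to show how the direct-to-first-reflection gap shrinks as the source moves away from the mics:

    ```python
    import math

    SPEED_OF_SOUND = 340.0  # m/s, roughly

    def first_reflection_gap(source, mic, wall_y):
        """Delay (ms) between the direct sound and one side-wall reflection,
        using the image-source trick: mirror the source across the wall y = wall_y."""
        direct = math.dist(source, mic)
        image = (source[0], 2 * wall_y - source[1])
        reflected = math.dist(image, mic)
        return (reflected - direct) / SPEED_OF_SOUND * 1000.0

    mic = (0.0, 0.0)       # main mics, on the centre line (made-up layout)
    wall_y = 10.0          # side wall 10 m to one side of the centre line

    strings = (10.0, 0.0)  # close to the mics
    snare = (25.0, 0.0)    # far upstage

    print("strings:", round(first_reflection_gap(strings, mic, wall_y), 1), "ms")
    print("snare:  ", round(first_reflection_gap(snare, mic, wall_y), 1), "ms")
    ```

    With these invented positions the gap comes out around 36 ms for the strings but only about 21 ms for the snare - the same tendency Peter describes.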

    I hope my arguments are clear, if not, ask (or shoot) [;)]

    Peter
    www.PeterRoos.com

  • Peter is right. Use predelays for a dedicated reverb if you want to put a certain instrument _in front_ of others.

    /Dietz - Vienna Symphonic Library

  • Hello again
    Yes, you are right, Peter: the direct sound of an instrument played at a distance of 20 m also arrives with a delay of around 60 ms.
    Yes, you are right, Peter: the delay of the very first reflection from an instrument played at a distance of 20 m is not (only) 60 ms.
    Yes, it looks as though what I wrote is not the whole truth. So I have adapted the answer to PolarBear into a more general version. I hope you can agree with it in this form.
    Peter and Dietz, thank you for correcting me.

    Here are some more additions.
    How big we perceive a room to be depends on the delay of the very first reflections relative to the direct sound we hear. That's a common fact we can read in every acoustics book.
    Suppose you take a room of 20 x 20 m: the shortest possible path for an indirect sound signal from the instrument to a listener is around 28 m (2 x 14 m).
    The difference to the 20 m direct path is in this case 8 m.
    http://homepage.hispeed.ch/beat.kaufmann/Reflections.jpg
    That would mean a delay of ~24 ms. But there are others of those delayed reflections - after 50 ms, 70 ms, 80 ms... All these reflections reach the listener; in the end he will register thousands of them. The sum of all these later reflections is what we call reverb. Maybe an average predelay of 50 - 70 ms for the very first reflections in this room and the situation shown is a good value.

    Suppose you take the same 20 x 20 m room, but with a sound-source distance of 10 m: the shortest possible path for an indirect sound signal from the instrument to the listener is around 24 m (2 x 12 m). The difference to the 10 m direct path is in this case 14 m. Peter is absolutely right when we speak about the very first r e f l e c t i o n (singular):
    http://homepage.hispeed.ch/beat.kaufmann/Reflections2.jpg
    That would mean a delay of ~41 ms. I believe the difference to the example above is that lots of the next "very first" reflections have a delay near 41 ms too, so an average for these very first reflections will not be as high as in the example above...???
    ...Peter, if your vote tells us the full truth, we have to say:
    The nearer an instrument is in the room, the more delay of the indirect signal you have?? My impression is that the instrument then always sounds near, but the room seems to grow??
    Please forgive me for being so provocative. [;)]
    The story of reflections and reverb is very, very complex. This might also be the reason that so many reverbs and reverb concepts are on the market.
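
    A quick numeric check of the two 20 x 20 m cases above (a sketch only, assuming a speed of sound of ~340 m/s and using the rounded path lengths from Beat's figures):

    ```python
    SPEED_OF_SOUND = 340.0  # m/s, roughly

    def gap_ms(direct_m, reflected_m):
        """Delay between the direct sound and the shortest reflection, in milliseconds."""
        return (reflected_m - direct_m) / SPEED_OF_SOUND * 1000.0

    # Source 20 m from the listener: direct path 20 m, shortest reflection ~28 m
    print(round(gap_ms(20.0, 28.0)))   # ~24 ms

    # Source 10 m from the listener in the same room: direct 10 m, shortest reflection ~24 m
    print(round(gap_ms(10.0, 24.0)))   # ~41 ms
    ```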

    With SIR we have 2 - 3 knobs... and
    there were guys who wanted help, not irritation
    The question was:
    “How to position instruments in a room?”
    The following summary could be useful:

    Dietz:
    Use predelays for a dedicated reverb if you want to put a certain instrument in front of (or behind?) others.
    Evan (summarized):
    Use impulses sampled close up for instruments which should be placed and played near you, and vice versa. In other words: use the impulses as they were recorded. Additional note: I recommend using the dry direct signal as well; the effect of depth only works properly in combination with this direct signal.
    Peter:
    Read his interesting explanations about mics, stages, delays and recordings for more background.
    Beat:
    Take your ears, your reverb, different impulses, different predelays, EQs and some time, and have a go. In the end our ears together with our brain generate the room impression - and only this impression counts, not the technical facts.

    All the best
    Beat
    Peter and Dietz > Please believe me, I DON'T WANT A "SPACE" WAR! [;)]

    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  • The answer is clear, but not simple: the closer you are to a signal source, the longer the delay will be between the direct signal and the perceived onset of the dense reverb tail. The actual delay depends on the room size, the character of the surfaces of walls, floor, ceiling and furniture, the audience, and so on.

    This is _not_ true for the early-reflections part, of course, as the first reflections will come from the floor right in front of you - with just a very small delay with respect to the direct sound. In certain cases the time window of discernible ERs will be so small, due to the character of the room, that you will get the impression of an almost "immediate" reverb tail even though you are close to the source.
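
    A little arithmetic makes this more tangible. Assume, purely for illustration, that the dense tail of some imaginary hall becomes audible about 80 ms after a note is emitted, regardless of where you sit (that figure is an assumption, not a measurement); only the arrival of the direct sound changes with your distance:

    ```python
    SPEED_OF_SOUND = 340.0   # m/s, roughly
    TAIL_BUILDUP_MS = 80.0   # assumed: time after emission until this (imaginary) room's
                             # reverb tail sounds "dense" - an invented room property

    def direct_to_tail_gap(distance_m):
        """Gap (ms) between the direct sound reaching the listener and the dense tail appearing."""
        direct_ms = distance_m / SPEED_OF_SOUND * 1000.0
        return TAIL_BUILDUP_MS - direct_ms

    print(round(direct_to_tail_gap(2.0)))    # ~74 ms gap when you sit close to the source
    print(round(direct_to_tail_gap(20.0)))   # ~21 ms gap from the back of the hall
    ```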

    Clear as mud, eh? ;-]

    HTH,

    /Dietz - Vienna Symphonic Library

    /Dietz - Vienna Symphonic Library
  • last edited

    @Dietz said:


    Clear as mud, eh? ;-]
    Dietz

    HELP!! HEEELLLLLP_P_P__P___P_____p_______ .. . .

    I just (48,456 ms ago) drowned in a muddy swamp of reflected reverb impulses...
    ...When I'm up in heaven I will ask to play my harp "dry"! [;)]


    all the best down on earth
    Beat

    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  • I use convolution reverbs on send/returns. They have their own wet/dry control, but in my opinion it's easier if you have individual control over each channel.

    Also, I don't understand why moving the instrument is more correct than moving the mics, Mr. Evan. You get a closer ambience recording and a farther one. Why is that backwards? You can put the farther one in surround speakers, for example, or you can use the two separately to simulate different depths.
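
    Written out, the send/return approach from the first paragraph might look like the offline sketch below. The send levels and track names are invented; the convolution return stays 100% wet, and depth is controlled purely by how much each channel sends:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def db(gain_db):
        """dB value -> linear gain factor."""
        return 10.0 ** (gain_db / 20.0)

    def mix_with_reverb_send(tracks, send_db, ir):
        """tracks: mono numpy arrays (same sample rate); send_db: one send level per track.
        The reverb sits on a return bus and stays 100% wet; each channel keeps its own
        dry signal, and its apparent distance depends on how much it sends."""
        length = max(len(t) for t in tracks)
        dry_bus = np.zeros(length)
        send_bus = np.zeros(length)
        for track, level in zip(tracks, send_db):
            dry_bus[:len(track)] += track
            send_bus[:len(track)] += track * db(level)
        wet_return = fftconvolve(send_bus, ir)   # convolution reverb, 100% wet
        out = np.zeros(len(wet_return))
        out[:length] += dry_bus
        out += wet_return
        return out

    # e.g. violins fairly dry (-12 dB send), horns further back in the hall (-6 dB send):
    # mix = mix_with_reverb_send([violins, horns], [-12.0, -6.0], hall_ir)
    ```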

  • I wasn't sure whether this should go here or in the GigaStudio topics, but it is about mixing:

    How does GigaPulse factor into this dry/wet discussion? The question is important because you could use the dry signal to avoid multiple instances, which eat up the CPU very fast. I'm wondering what Dietz or other mixing engineers feel is better: to use the "perspective" to achieve a clearer sound, or to mix in the dry signal? I am a little leery of mixing in dry signal, because isn't that contradicting the entire idea of convolution - to place the source in a new space?

  • From a dogmatic point of view, it may seem to be a contradiction to mix the dry part with the convolved "wet" signal. But in real life, recording engineers will quite often combine a main microphone system (i.e. the convolution part, in our scenario) with some amount of closer-miked group or soloist microphones (i.e. the direct signal).

    HTH,

    /Dietz - Vienna Symphonic Library

  • Thanks Dietz! That is a very interesting analogy...