Vienna Symphonic Library Forum

  • transients

    This may not be a practical way of looking at it, but I was thinking about the previously discussed problem of loud instantaneous transients jumping out of a mix, and how it reveals something fundamentally artificial about one of the most basic mixing techniques - the use of dry signal with wet.

    In reality, even in a small bedroom, let alone a concert hall, you never hear a dry signal at all. No matter where you are, it is 100% wet from that position. The "wet" signal that you hear corresponds to the room characteristics, and that is how it sounds clear or muddy or whatever.

    So ideally, wouldn't the best reverb be either no reverb added, as in the EWQLSO, or 100% wet convolution that corresponds exactly to the room characteristics you want? I know that in practice you can't do this (at least on most normal recordings), but then artifacts like jumping transients would be lessened or maybe eliminated.

  • We discussed this possibility before, and in principle, you're right.

    But as a matter of fact, we hardly ever hear a "conventional" orchestral recording based on nothing but the main room microphones. Most of the time there are several close mics mixed with the main system, and this "hybrid" sound is what we are used to.

    In our case this means finding the right balance between the direct signal of the samples and the ambience, and taming the sharpest transients, either by dynamic processing or by "blurring" them with dense reflections, thus hiding the possible perceptual irritations for the ear.
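A minimal sketch of the two steps described above - taming the sharpest transients with simple dynamic processing, then mixing the result with a convolved ambience component. All values, gains, and function names here are invented for illustration; this is not how any VSL processing actually works.

```python
def soft_limit(samples, threshold=0.8):
    """Attenuate only peaks above `threshold` (crude transient taming)."""
    out = []
    for s in samples:
        if abs(s) > threshold:
            # keep the sign, squash the overshoot to half
            excess = abs(s) - threshold
            s = (threshold + 0.5 * excess) * (1 if s > 0 else -1)
        out.append(s)
    return out

def convolve(signal, ir):
    """Direct-form convolution: out[n] = sum_k signal[k] * ir[n-k]."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

dry = [0.0, 1.0, 0.1, 0.05, 0.0]   # a sharp transient
ir  = [0.0, 0.3, 0.2, 0.1]         # toy "dense reflections" tail

tamed = soft_limit(dry)            # transient pulled down to 0.9
wet   = convolve(tamed, ir)        # ambience from the tamed signal
mix   = [0.7 * (tamed[i] if i < len(tamed) else 0.0) + 0.3 * w
         for i, w in enumerate(wet)]
```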

    We hope to cover this whole issue with the development of our MIR engine, BTW. Up to now, it's not very likely that one could achieve a convincing "wet only" mix with synthetic reverb, with respect to the sound itself as well as the imaging. With the MIR, this could change dramatically. :-]

    ... and of course, there are many possible situations where you actually would _want_ to hear the intimate sound of an instrument, with all its peculiarities, so it's good that they are there in the first place. It's easy to add ambience - compared to the task of taking it away once it is recorded. ;-]

    /Dietz - Vienna Symphonic Library

  • last edited

    @Another User said:

    In reality, even in a small bedroom let alone a concert hall, you never hear dry signal at all


    You're right: most of what we hear in 'real life' is indirect - reflected off room boundaries. But the problem starts when trying to reproduce music (or any other sound) on speakers. Here you need to 'help' the listener: there just isn't enough spatial information for them to be able to 'zoom in' to the source. That's where spot mics come in on conventional recordings, or at least mic positioning that gets a significant proportion of direct sound off the instruments.

    The equivalent to that in what we're doing with samples is the direct 'dry' component in the mix. So even a (from the user's point of view) '100% wet' solution could only be effective if it included a significant proportion of direct signal. 'Dry' is not a bad word - the dry component simply represents the direct path from the source to the listener. We add the reflections to that, to complete the picture.
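Simon's point can be put in a toy calculation. Because convolution is linear, convolving with an impulse response whose first sample is a unit spike is the same as adding the dry signal to the reflections - so a "100% wet" convolution whose IR keeps the direct path is mathematically a dry + wet mix. The IR values below are invented for illustration.

```python
def convolve(signal, ir):
    """Direct-form convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

dry = [1.0, 0.5, 0.25]
reflections    = [0.0, 0.0, 0.4, 0.2]   # IR with the direct sound removed
ir_with_direct = [1.0, 0.0, 0.4, 0.2]   # same IR, direct spike kept at t=0

wet_only   = convolve(dry, reflections)
full       = convolve(dry, ir_with_direct)
dry_padded = dry + [0.0] * (len(ir_with_direct) - 1)

# full == dry + wet_only, sample by sample
assert all(abs(full[i] - (dry_padded[i] + wet_only[i])) < 1e-12
           for i in range(len(full)))
```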

    Does that make sense?

    All the best,

    Simon

  • Yes, that makes sense. And of course I've always done mixing with a lot of dry signal in the old-fashioned way. But I've been noticing with convolution that it is getting close to possible to completely envelop the signal in the room characteristics and have it sound good, because you select those room characteristics for exactly the sound you want and don't need to "dilute" them.

    I realize a lot of this is not generally usable in a real-world setting. But the EWQLSO is actually doing this very same thing I'm talking about, though not by adding it - instead, the concert hall's ambience is the only reverb audible, via the release samples, etc. Unfortunately you are stuck with that sound, though, which is why I still think the VSL approach of recording as dry as possible is better, especially taking the MIR project into account. As Dietz points out, it will probably change a lot of things.

    By the way - again impractically - could convolution be used 100% wet right now for a normal mix if you had impulses that were recorded from different locations? For example, if you took several recordings in the same concert hall: an impulse from back among the seats for the string ensembles, but also one from up on the stage, close to the position of the players, for soloists within that context. I imagine that sound would be "drier" even though it was used 100% wet. None of the impulses I've gotten are done this way, though - there is always just one for each space.

  • last edited

    @William said:

    could convolution be used 100% wet right now for a normal mix if you had impulses that were recorded from different locations?


    That could be very interesting, but only if the impulses had the initial direct 'bang' included (normally, reverb impulses have the direct signal removed, and the first sound you hear is the first reflection) - because then we'd have the direct path from source to microphone (from different positions in the hall) in there too. But it would have to be extremely well recorded, and I can still imagine a certain amount of signal degradation. Then again, impulses can be edited, modelled, optimized (as the E. Cholakis impulses show), so with some help...? [:)]
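A toy illustration of why a close-position impulse would sound "drier" even at 100% wet: an IR recorded near the source keeps a strong direct 'bang' relative to its reflections, while one from back in the seats has a weaker direct component and denser ambience. The IR values and the ratio function are invented for illustration.

```python
def direct_to_reverb_ratio(ir):
    """Energy of the first (direct) sample vs. the rest of the impulse."""
    direct = ir[0] ** 2
    reverb = sum(h ** 2 for h in ir[1:])
    return direct / reverb

# hypothetical impulses from two positions in the same hall
ir_on_stage = [1.0, 0.1, 0.3, 0.2, 0.1]   # strong direct path, sounds "dry"
ir_in_seats = [0.3, 0.2, 0.5, 0.4, 0.3]   # weak direct path, mostly ambience

assert direct_to_reverb_ratio(ir_on_stage) > direct_to_reverb_ratio(ir_in_seats)
```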
    I guess we're treading here on MIR territory, and probably Dietz doesn't want to tell us too much, just now [[;)]] ...!

    Interesting area though!
    All the best,
    Simon

  • [H]

    /Dietz

    /Dietz - Vienna Symphonic Library
  • last edited

    @Dietz said:

    [H]

    /Dietz



    Yes, I thought so... [:D]

  • While we're on the topic of IR's...

    There is this "feature" that still "bugs" me. I use Acoustic Mirror (with Ernest's Pure Space quality impulses). I discovered a while ago that the IR approach has absolutely no lateral modeling of Early Reflections. A sound coming from one channel has 0% effect in the other channel.

    So, it seems to me, that convolution reverb is based on calculations that blur a signal only in time but not in space... Because the impulse has frequency-related delays per channel, we perceive the result as stereo. I think it can be somewhat compared to the comb-filter approach for stereo-izing mono sounds, but in this case with a very fast changing comb-filter.

    The typical workaround (or: "correct" way of using this "feature" [[;)]] ) is to feed a nearly mono mix into the convolver and to mix the 100% wet output with the original stereo mix (or submix). Essentially: inserting the lateral signals into the middle of the ambience "space".
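The workaround described above can be sketched in a few lines: fold the stereo mix to mono, run that through the convolver 100% wet, then add the same wet signal equally under both channels of the original stereo mix, so the ambience sits in the middle. The signals and gains are illustrative only.

```python
def convolve(signal, ir):
    """Direct-form convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

left  = [1.0, 0.0, 0.5]            # original stereo mix
right = [0.0, 0.8, 0.2]
ir    = [0.0, 0.4, 0.2]            # toy reverb impulse

mono = [0.5 * (l + r) for l, r in zip(left, right)]   # fold to mono
wet  = convolve(mono, ir)                             # 100% wet ambience

# same wet signal under both channels: lateral signals keep their position,
# the ambience is centered
out_left  = [(left[i] if i < len(left) else 0.0) + 0.3 * w
             for i, w in enumerate(wet)]
out_right = [(right[i] if i < len(right) else 0.0) + 0.3 * w
             for i, w in enumerate(wet)]
```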

    Try it for yourself: pan an instrument hard left and run it straight into Acoustic Mirror or SIR. The result will be a disappointing left-only ambient sound. Acoustic Mirror's slider for manipulating the width works on exactly the wrong part: it narrows the impulse, not the input...
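That experiment in numbers: with channel-independent convolution (each channel convolved only with its own half of a stereo IR, which is how the convolvers named above appear to behave), a hard-left instrument produces exactly zero ambience in the right channel. The signals and IR halves are invented for illustration.

```python
def convolve(signal, ir):
    """Direct-form convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

left_in, right_in = [1.0, 0.5, 0.2], [0.0, 0.0, 0.0]   # panned hard left
ir_left, ir_right = [0.0, 0.3, 0.1], [0.0, 0.2, 0.4]   # stereo impulse halves

left_out  = convolve(left_in, ir_left)     # ambience only on the left
right_out = convolve(right_in, ir_right)   # no left-to-right crosstalk at all

assert any(left_out) and not any(right_out)
```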

    Maybe Dietz can share his thoughts on this anomaly, and whether MIR will have a workaround for this by default.

    At this moment I think convolution is great for creating the reverb tails, but you still need to create the early reflections first with other tools, such as Cakewalk Soundstage (nice) or TrueVerb (ERs only - usable). The less tail you get from this stage, the better, of course. I have heard some demos from Maarten without any reverb applied, and they showed that VSL seems to have just the right amount of ERs. So, from my perspective, convolution would be exactly the right approach for VSL. For "very dry" libraries, such as DDSW and DDSB, you still have to emulate ERs first, otherwise you will get a weird separation between the instrument and the ambience.

    Cheers,

    Peter Roos

  • This is one of the more technical reasons that led to the idea of the MIR, yes. Thanks to the multitude of single IRs, the hard separation should not pose any problems anymore with our concept. - Phasing and comb-filtering _is_ a problem, of course, but our R&D guys seem to have solutions for this.

    ***

    Please understand that I won't talk about too many details publicly - not because I'd fear you guys stealing our ideas, but to keep the level of wrong expectations among our customers at a minimum. (There's an Austrian proverb: "Über ungelegte Eier soll man nicht gackern" - roughly, "don't cackle about unlaid eggs" ... have fun with the automated translation :-] ...)

    /Dietz - Vienna Symphonic Library

  • Hehe, thanks Dietz,

    In Holland we have a proverb: "Eén ei is geen ei" (One egg is no egg). I still don't know what it means [:D]
    Maybe it applies [[;)]]

  • I think I know what that proverb means - unless you have a dozen eggs you don't have enough to really do anything with... or something like that (?)

    I used the Cholakis convolution on my demos (which no one has noticed :cry: [:)]) and at least I didn't hear a problem, though I see what you're talking about.

  • last edited

    @William said:

    on my demos (which no one has noticed :cry: [:)])


    No, when did they appear? I'll check them out tomorrow!

    re. the business of left/right separation: I don't totally understand the problem, unless of course the source is mono. On a conventional recording, ambient mics or main system mics (spaced pair, or whatever) are generally kept panned hard left/right too. This is the equivalent of the stereo impulse, and the convolution software not allowing any left-right crosstalk. Here too, it would be disappointing if you just took the left half of the ambience.
    I guess the behaviour of the left versus right in an impulse depends entirely on how it was recorded - there will be methods that are mono-compatible, and others that are less so - but the convolution software should keep the channel separation, surely...
    I'm very excited about the prospect of impulses recorded at different locations within the hall having the capacity to introduce genuine depth into the recording (i.e. phase-based depth) instead of just reverb-induced depth (which can work if phase-coherent). We will see...