Vienna Symphonic Library Forum
Forum Statistics

194,727 users have contributed to 42,932 threads and 258,001 posts.

In the past 24 hours, we have 7 new thread(s), 19 new post(s) and 110 new user(s).

  • @scoredfilms said:

    ...It places a dry recording on top of a room, or on the side of it. But I feel like the instrument is never truly "in" that sonic space. It doesn't have a "whole" sound you get from a live recording.

    It is true that real instruments recorded in real rooms come across differently than mixes built from samples. With real recordings we have time delays between the microphones, which give us this nice room and "spacy" feeling. Especially recordings made with fewer microphones, in A/B technique, can lead to such a nice feeling of space.

    With samples we pan the signals from left to right, which means that we mainly have just a volume difference between the left and the right channel. Even if we use true stereo reverbs (which produce different reverb signatures for the left and the right channel), it is ultimately only a simulation of reality.
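    To make Beat's point concrete, here is a minimal numpy sketch (the function names and sample rate are my own assumptions, nothing from VSL) contrasting amplitude-only panning, where the two channels differ only in level, with a spaced-microphone-style time delay, where the identical signal arrives at the two channels at different times:

```python
import numpy as np

SR = 44100  # sample rate in Hz; an assumption for this sketch

def pan_amplitude(mono, position):
    """Constant-power panning, position -1.0 (left) .. +1.0 (right).
    Note that only the LEVEL differs between the two channels."""
    angle = (position + 1.0) * np.pi / 4.0       # 0 .. pi/2
    return np.stack([mono * np.cos(angle),       # left
                     mono * np.sin(angle)])      # right

def pan_time_delay(mono, delay_ms):
    """Crude spaced-microphone (A/B) style placement: the identical
    signal reaches one channel slightly later than the other."""
    d = int(SR * delay_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(d)])
    right = np.concatenate([np.zeros(d), mono])
    return np.stack([left, right])

tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)
amp = pan_amplitude(tone, 0.5)    # right channel is simply louder
ab = pan_time_delay(tone, 0.3)    # right channel arrives ~0.3 ms late
```

    The first function can only make one side louder; the second introduces the inter-channel timing differences that spaced A/B microphones capture naturally, which is exactly what plain panning cannot reproduce.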

    But there is a trick: You can get some of this airy and roomy feeling of a real recording session by choosing different depths for different instrument sections. You can even overdo these different depths a bit, no problem. This trick is a simulation as well, but it can lead to a more transparent and more interesting mix.
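    As a toy illustration of the depth trick, here is a small numpy sketch (the gain curves and pre-delay amounts are my own assumptions, purely to taste): instruments placed further back get less direct signal, more room signal, and a slightly later arrival:

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def place_at_depth(dry, wet, depth):
    """Toy depth placement, depth 0.0 (front) .. 1.0 (back): further
    back means less direct signal, more room signal, and a slightly
    later arrival. The exact curves are assumptions, purely to taste."""
    direct_gain = 1.0 - 0.7 * depth        # quieter when further back
    room_gain = 0.3 + 0.7 * depth          # roomier when further back
    predelay = int(SR * 0.002 * depth)     # up to ~2 ms extra travel time
    out = np.zeros(max(len(dry) + predelay, len(wet)))
    out[predelay:predelay + len(dry)] += direct_gain * dry
    out[:len(wet)] += room_gain * wet
    return out

# A front instrument keeps its full direct sound...
impulse = np.zeros(100)
impulse[0] = 1.0
front = place_at_depth(impulse, np.zeros(100), 0.0)
# ...while a back instrument arrives later and quieter.
back = place_at_depth(impulse, np.zeros(100), 1.0)
```

    Exaggerating the spread between the front and back values, as Beat suggests, tends to strengthen the perceived depth of the mix.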

    Listen to this example and observe the different depths (close and far instruments). The whole mix appears roomy and airy even though only the pannings mentioned above were used, doesn't it?

    Could be that you are looking for this "sound"...?

    If yes, then your problem is a matter of depth... and not a matter of "hall", as you put it above.

    Beat


    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  • Of course convolution reverberation only provides a model of a specific acoustic space.  Like any model, it's a simplification of the real world.  Convolution assumes a linear, time-invariant 'world', which it can then model quite accurately.  

    But in the real world all kinds of non-linearities creep in; the way the materials of the room respond to different levels of sound is the most obvious one that would explain the differences between lower- and higher-level sounds.  There's also apparently a huge challenge in getting impulse responses with the best (greatest) signal-to-noise characteristic.  Any residual noise left in the impulse will be reintroduced when the model is excited by a sound.  I wonder if that might also explain why a louder sound brings up a different-sounding reverberant response?  
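    A tiny numpy sketch of the linearity point (the toy impulse response and noise floor here are invented for illustration): convolution scales everything baked into the IR, including any residual measurement noise, in exact proportion to the input level, so the model itself cannot produce a level-dependent change in character:

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 8000  # small sample rate just to keep the sketch fast

# Toy impulse response: an exponentially decaying noise burst, plus a
# tiny noise floor standing in for residual measurement noise.
t = np.arange(SR // 2) / SR
ir_clean = rng.standard_normal(len(t)) * np.exp(-6.0 * t)
ir_noisy = ir_clean + 1e-3 * rng.standard_normal(len(t))

click = np.zeros(SR // 4)
click[0] = 1.0  # dry excitation

# Convolution is linear: exciting the noisy IR ten times harder
# scales everything, including the residual noise, by exactly ten.
wet_soft = np.convolve(click, ir_noisy)
wet_loud = np.convolve(10.0 * click, ir_noisy)
```

    So if a louder sound really does excite a different-sounding response in a real hall, that behaviour is by definition outside what a single static IR can capture.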

    Either of these, a non-linear space treated by a model assuming linearity, or the non-linearity of the impulse measuring system itself, might account for the differences Sean is demonstrating.  

    Beat's suggestion of mixing the best of both worlds is a good one and reminds us, again, that it's our ears that are the final arbiter of quality, not slavish adherence to a single idea just because it's theoretically better, or worse yet, the hip thing of the moment.


  • A nice example you provided Sean. I tried to match it as closely as possible with 'artificial' reverb. As there obviously is no dry example for the brass I had to use samples to recreate it. It's not as nice, but don't let that distract you. It's about the reverb after all.

    That's as close as I got in reasonable time:

    http://goo.gl/GFTVP1

    The original is a bit wider, which in hindsight I should have matched more closely in the reverb tail. Here's a quick and dirty after the fact solution (I widened the mix of the audio file):

     

    http://goo.gl/PIj9pU


  • @Another User said:

    There's also apparently a huge challenge in getting impulse responses with the best (greatest) signal-to-noise characteristic.  Any residual noise left in the impulse will be reintroduced when the model is excited by a sound. 

    That's true, but with some effort we are able to capture IRs with a signal-to-noise ratio better than the range covered by most average A/D-converters.
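    For context on "the range covered by most average A/D-converters": the theoretical dynamic range of an ideal linear PCM converter is roughly 6.02 dB per bit (a back-of-the-envelope figure that ignores dither and real-world converter noise):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of ideal linear PCM: 20*log10(2^bits)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # -> 96.3 (dB, 16-bit)
print(round(dynamic_range_db(24), 1))  # -> 144.5 (dB, 24-bit)
```

    An IR whose signal-to-noise ratio beats a 24-bit converter's usable range would therefore be quieter, noise-wise, than the capture chain itself.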

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • @Dominique said:

    A nice example you provided Sean. I tried to match it as closely as possible with 'artificial' reverb. As there obviously is no dry example for the brass I had to use samples to recreate it. It's not as nice, but don't let that distract you. It's about the reverb after all.

     

    Dominique,

    Thanks, that's a great example for comparison. I agree that this isn't a samples issue but a reverb thing. There are some sample differences, of course, but both are great and equally usable. My example has less high end and a bit more low end in it, so I'm keeping that in mind and ignoring it as well.

    The biggest thing I noticed was that the early reflection in the room seems to have a lot of excitement. The entire tail sounds like any verb I think would 'continue' the sound. But at the very beginning, I'm listening to the way the low and high seem to interact with the space. There is a vibrance there that I feel is missing in the dry-to-verb example.

    I'm sure it's possible to emulate. But until I can peg down what it is, it's hard to talk about how. I wonder if some kind of processing needs to be done on the early reflection, maybe even based on the instrument. Again, call me crazy. I'm sure Dietz and Beat think I'm nuts! lol That's just what my ears tell me. Sometimes translating ears to informed knowledge and then to language is hard.

     

    Beat, for the record... I love listening to all of your examples. 😊 I'm still not convinced it's a depth issue.

    -Sean


  • Dominique,

    One more thing...

     

    Even with a solo flute this is noticeable to me. I am satisfied with high solo woodwinds from VSL when placed in verb. But satisfied doesn't mean the same problem isn't present. It's just not as much of an issue as it is with other instruments.

    The instrument when recorded wet seems to marry the surfaces to create a unique shape, which then tails into the room. The immediate response of the room is still different on a solo flute, but it works. However, when I try to blend woodwinds into an ensemble or section, I feel like the lack of this shape prevents a natural blend. Because of that I don't use the ensemble woodwinds at all anymore, nor do I use the woodwinds to create a section. I strictly use them as solo instruments. Granted, I don't have any of the Dimension series, so that jury is still out. But it's hard to convince myself to go Dimension when I feel like I still have a reverb issue. Plus I really don't have the computer for it, though that's changing soon. :)

    I just wanted to describe a bit more of what I've noticed. I fully anticipate being called crazy after this! lol But I think it helps to illustrate a different angle to the same problem.

     

    -Sean


  • One more thing... again...

     

    I forgot that I demoed MIR before MIR 24 happened. SO.... I have another demo license! Ba ha ha!!! I'll do some tests in the next couple days when I have more time and I'll report back if I find anything helpful. :)

     

    -Sean


  • Very instructive thread.

     

    Yet, for me, the problem is a musical one: why oh why does MIR sound harsh with strings?

    Nobody on the signature sound of convolution reverb?

     

    Respect to all.

    Stephane.


  • @Stephane Collin said:

    Yet, for me, the problem is a musical one: why oh why does MIR sound harsh with strings?

    Nobody on the signature sound of convolution reverb?

     

    Stephane,

     

    After playing with MIR at least a little bit I'm starting to wonder if there aren't multiple contributing factors here. The hall, the samples, the settings, the EQ, the, the, the...

     

    Correction:

    I couldn't quite replicate what Dominique did, though he used the hybrid reverb and I didn't. His was pretty close to my example, except for the difference I outlined earlier. Yet when I loaded MIR and Teldex I couldn't get a result I was happy with, with almost any VSL instrument. Yet with another dry instrument I was very pleased. Keep in mind, I was pretty happy with Dominique's example. It was still missing one thing I really wanted, but it was much closer than anything I did on my own. And the fact that one of my other dry instruments sounded great makes me wonder if I'm simply not applying effects properly to VSL to get a decent result. I don't own Vienna Suite and can't justify that right now. I can justify MIR for some of my own recordings alone, but I really want to get my VSL samples where I want them. I'm using different plugins on the VSL suite and I see improvements... but nothing like Dominique's example. And even then, I feel his example still misses something. So I could get Vienna Suite, but I'd still feel like I had a bit more to solve.

     

    I should add that I'm thinking MIR may not color the sound in any noticeable way. My own dry test was very impressive, Duke Ellington good. So I'm thinking that if a specific instrument has some character that isn't desirable, MIR simply amplifies it. I noticed that with trombones I was very pleased with the lower staccato velocity in MIR, and practically disgusted with the higher velocity. That suggests to me that either the sample lacks something, I'm not processing the sample correctly, or MIR should respond differently based on the dry signal.

     

    I feel like this mystery just exploded into something far more complex for what my brain wants to process today. oy.

     

    -Sean


  • Too bad I don't have MIR. But it's informative to try to approximate your example nonetheless. At least for me, that is :-) Anyway, I see what you mean about the ERs being more vibrant in your example. That was actually a very good hint. I switched to an IR with more pronounced ERs (wow, does that sound geeky? :-)), and I like the result much better than my former attempt. The timbre doesn't approximate your example anymore (it's 'clearer', less warm), and in my eagerness to demonstrate the new ERs I may have pushed the instruments further back. But it's more a test to see whether it comes closer to what you thought was previously missing: a natural vibrancy in the ERs.

    This one sounds pretty good to me:

    http://goo.gl/MzSlRg

     

    This one has even more ERs. It's almost a bit over the top for my taste. But could be that it has what you are looking for:

    http://goo.gl/o9BkDD

     

    As an aside: in your example I can hear the reflections coming from the side walls really well, especially on the second note. I like that. That's less so in my examples; the reflections from the back wall are stronger there. In return, I think the instruments' positions are slightly easier to pinpoint in mine.


  • @Another User said:

    The timbre doesn't approximate your example anymore (it's 'clearer', less warm), and in my eagerness to demonstrate the new ERs I may have pushed the instruments further back. But it's more a test to see whether it comes closer to what you thought was previously missing: a natural vibrancy in the ERs.

    Okay, so I hear what you mean, but in some ways that's more "tube"-ish to me than like a hall. Although I'm noticing something else now:

    • Mine: Low frequencies present in the sound source and in the reflections.
    • Yours: Low frequencies present in the sound source, but less in the reflections.

    The timbre of the reflection seems to have more high end in it. It's almost like there is more attack in the verb. I try to get the same low end out of VSL, but in doing so I feel like I'm crushing the audio so much that 1) I'm losing the raw and natural vibrance of the attack and 2) I'm murdering an innocent and helpless waveform. It sounds awful.

    Are you EQ'ing the verb or just the dry sample?

    -Sean


  • @Dietz said:

    That's true, but with some effort we are able to capture IRs with a signal-to-noise ratio better than the range covered by most average A/D-converters.

    Kind regards,


    And that's why I'm so impressed with the sound of MIR so far!  I've done a cursory experiment with a surround mix using MIR, and the added sense of spaciousness is pretty amazing.  The only reason I'm not moving to surround full-time is that I currently only have one machine to run the orchestra with, and asking MIR to process five or six channels of anything but a small ensemble overwhelms the system.

    Best,

    Kenneth.


  • Oh, low end is what you're looking for. Sorry, somehow I missed that part. In that case the last examples went in a totally wrong direction. But more low end is very much possible. Here's another attempt:

    http://goo.gl/4q4yq6

     

    As the topic of this thread is reverb, I exclusively used reverb plugins on all examples, no other effects. I didn't even use EQ, neither on the dry signal nor on the reverb. I achieved the different results simply by choosing different ERs and adjusting some parameters of the reverb (stage position, volume, tail width, etc.).


  • @Dominique said:

    Oh, low end is what you're looking for. Sorry, somehow I missed that part. In that case the last examples went in a totally wrong direction. But more low end is very much possible. Here's another attempt:

     

    lol, I do like the amount of low in my example, but I'm not after that in this case. I'm after getting all my VSL instruments to sound as good (and with a relative degree of consistency) when processed through reverb. 

    http://goo.gl/1ryfxn

    New File - "Dominique's - Somewhere in the Middle"

     

    I felt like yours had a bit too much boomy low, so I cut it down a bit. Now I feel like it's closer to mine. However, notice how the low end sounds fine but the verb isn't giving us as much high end. I imagine that could have to do with the IR you put it in. But mine still sounds more natural to me because I'm hearing the lows and highs represented as equally as they sounded from the instrument itself. That sound continued into the hall. The dry examples seem to suggest the dry instruments aren't accurately represented when processed through/against an IR.

     

    I'll try to add a better example later today now that I have an MIR demo to test it. But I think it's pretty apparent in comparing my tweaked version of yours to my Brass #4. Thanks for the examples. It's at least very helpful in comparing. 😊

     

    -Sean


  • Hi Sean

    Hi Dominique

    Just so you won't be disappointed in the end... even if you both find a reverb that sounds to your taste(s), it doesn't mean that it will finally win the race with a whole orchestra (strings, percussion, etc.)

    😎

    Beat

    PS. Because I'm doing recordings, I always have tracks to compare my sample mixes with real tracks... and I must say that the effect hasn't been invented yet that can lift sample mixes up to the level of real recordings. But as I mentioned above, we can come close by choosing different depths (which is mainly a matter of ERs, by the way 😉)


    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  • Thank you for the warning, Beat. But no worries, I have my experience with sampled and real orchestras. There's always a trade-off. Adding reverb to dry samples isn't the same as samples recorded with baked-in reverb, which in turn isn't the same as a recording of a real orchestra. Dry samples won't give you the same roomy sound as wet samples. On the other hand, they are more flexible, and the legato is usually more convincing with the dry approach.

    That aside it was fun to test how close I could come to a very specific short snippet. And now I'm curious to hear what you can do with MIR, Sean 😊


  • Beat,

     

    For the record, my first recording session was for a solo cello. My skills are winds and piano so I wasn't playing. It wasn't my first experience with a cello of course, but having heard the track so many times as a mock-up... it basically ruined me for life. Cello samples are terrible. All of them, every library, every company. Most of the time I just can't force myself to mock them up anymore.

     

    Dominique,

     

    I uploaded a track (#6) with multiple examples. Same folder: http://goo.gl/1ryfxn

    1. VSL Solo Bone (Dry)
    2. VSL Solo Bone (MIR)
    3. VSL Bone Ens (MIR)
    4. Other Bone Ens (recorded wet)
    5. Mix of VSL and other Bones (VSL in MIR)
    6. Two lower notes, VSL in MIR then wet Bones

     

    The ensemble sounds great and both ensembles mix pretty well. The single bone, though, isn't usable IMO. Which is odd because I usually think the solo instruments sound great. Although lower velocities on the single bone sound fine to me. I tried two different dry cellos. They were both awful. However, I'm prejudiced, as is clearly stated in my comment to Beat. ;) I have a hard time with any strings in verb in general too. So again, prejudiced.

     

    Sorry for the delay. Still finishing up more important work for a film so I'm only exporting examples when I need a break.

     

    -Sean


  • These are definitely not the right reverb settings for the solo trombone. There is way too much sizzle in the reverb. I guess it's the high-frequency content that is captured because of close miking. Wet samples are naturally recorded at some distance, and some of the high-frequency content has been absorbed as the sound travelled through the air to the mic. I don't know how the VSL bones ensemble was miked, but it may have been from a greater distance than the solo bone. That would explain why the ensemble sounds fine with the same reverb settings. The lows are different too between VSL and wet, but the sizzle is the disturbing part, so we should try to fix that.

    We need to push the solo bone further back. Right now it is a close-miked, dry sound with reverb on top. Does MIR have an 'air absorption' filter? If so, make sure that it is switched on. Experiment with lowering the dry signal as well. In this case, when applying ERs, I would even try setting it to 100% wet (just the ERs); I don't know if that's possible in MIR. In any case, pushing the instrument back and lowering the dry signal should give you better results. If it's still not quite there, I'd use an EQ. As a starting point, lower everything above 5 kHz or so, and maybe boost a little between 200 Hz and 500 Hz.
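    That EQ starting point could be sketched with scipy like this (the filter orders and the 2 dB boost are my own assumptions, and simple Butterworth filters stand in for a proper shelving/peaking EQ; a rough illustration, not a recipe):

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100  # assumed sample rate in Hz

def tame_highs_and_warm(x, sr=SR, cut_hz=5000.0, boost_db=2.0):
    """Rough starting-point EQ in the spirit of the advice above:
    roll off the 'sizzle' above ~5 kHz and add a gentle 200-500 Hz
    lift. Filter orders and gain are assumptions, purely to taste."""
    # Roll off everything above cut_hz.
    b, a = butter(2, cut_hz / (sr / 2), btype="lowpass")
    y = lfilter(b, a, x)
    # Blend in a touch of band-passed 200-500 Hz warmth.
    b2, a2 = butter(2, [200 / (sr / 2), 500 / (sr / 2)], btype="bandpass")
    gain = 10.0 ** (boost_db / 20.0) - 1.0
    return y + gain * lfilter(b2, a2, x)
```

    In practice you would tune the cutoff and the boost by ear against the wet reference, exactly as described above.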


  • @Dominique said:

    [...] Does MIR have an 'air absorption' filter? [...]

    This would be a typical "algorithmic reverberation" feature. 😊 The air absorption due to the distance between source and listener (i.e. the Main Mic) in a MIR Venue is the real one, coming from the halls where the multi impulse response sets have been captured.

    I'd say it's more important to have the proper Instrument Profile activated, which includes the frequency-dependent directivity profiles for each Vienna Instrument. The implementation of these measurements also takes into account the recording setup used for the sampling session. (The General Purpose profiles for third-party signals only offer an approximation.)

    In addition, the hand-crafted Instrument Characters might be helpful, too (e.g. the "Distant" or "Warm" settings, in the case you were discussing).

    Kind regards,


    /Dietz - Vienna Symphonic Library
  • @Dietz said:

    I'd say it's more important to have the proper Instrument Profile activated, which includes the frequency-dependent directivity profiles for each Vienna Instrument. The implementation of these measurements also takes into account the recording setup used for the sampling session. (The General Purpose profiles for third-party signals only offer an approximation.)

    In addition, the hand-crafted Instrument Characters might be helpful, too (e.g. the "Distant" or "Warm" settings, in the case you were discussing).

     

    Dietz,

    Well, dang it! I looked over all of MIR's settings on the right panel. I had a field day tweaking and tweaking. But apparently once I scrolled down I forgot how to scroll up. The Instrument Profile and the Character were literally the ONLY two things I didn't touch. Go figure.

    I loaded the bone profile and it sounds better, but still not right to me. Ironically enough, I was actually happier with the generic profile on the bone ensemble, although only slightly.

    As for the Room, I did EQ the room, bringing down some highs. But without that the solo bone sounds even more off. Although I only EQ'd the room to get the two examples as close as I could for comparison.

     

    Dominique,

    Regarding instrument placement: believe it or not, the solo bone is in the same spot as the ensemble, offset by what I estimate to be '1 seat' distance to the left. Although I agree it actually does sound closer. I noticed that early on and tried adjusting the dry/wet fader, but I was happier with 50/50 in the end. I'm using the Teldex wide venue, and the bones are almost at the back, center, just a few 'seats' to the right.

     

    Same link: http://goo.gl/1ryfxn

    I added a #7 and #8, 7 includes just the instrument profile, 8 includes character adjustments as well. I liked the warm and the bite presets, depending on the use. And with other instruments I'm sure I'd like different presets. I'm sure every person would have their own preference.

     

    The good news is, I'm getting closer to results I can use and I'm happy about that. I'm realizing one problem with how the instruments are programmed and how I use VIP. But I think resolving that, a bit more MIR work, owning MIR, and owning the entire VSL product line will eventually make me happy. Unfortunately that all can't happen soon enough. 😉

     

    -Sean