Vienna Symphonic Library Forum

  • Geert

    Dietz will probably be more of "the person to talk to". [:)] Being part of the team that developed this library will put anyone there [;)]


    Anyhow, I believe there are going to be many routes to get specific sounds out of VSL. I've learned over time that you can coax quite a bit out of all types of samples with minimal tweaking, and just that tiny bit more with maximum tweaking [:)] This is, of course, only if you don't like the way VSL sounds on its own... which is hard to do. The samples are extremely clean and they "work"; I think quite a few people aren't used to that.

  • Hi!

    It would be very interesting for me, and maybe also for others, to exchange experiences with good reverb parameters. Can anyone recommend settings that sound very realistic, or that maybe simulate a famous concert hall? I mean parameters like room size, predelay, decay, distance, ...
    Does anyone use an additional delay and if yes, how do you use it?
    Thanks.

    Thomas

  • What I find is that it depends on the specific reverb units/plug-ins.

    Generally I use multiple reverbs for different "placement" in the room.

    I use reverbs with high early-reflection volume and shorter decay tails to simulate the stage and placement,

    and reverbs with longer decays, much less early-reflection data, and some EQ to emulate the "hall".

    I'll also usually put a reverb on the horns to simulate the "slap back", then put both the dry signal and the "slap back" into the other reverbs, pushing the slap back further "back" via pre-fader adjustments.

    One must understand that predelay is an important factor. Usually long predelays occur with large halls. Generally 50 ms is long; I tend to keep it around 35-40 ms unless the "slap back" isn't noticeable enough in the horns.

    Dry-to-wet signal ratios are important as well; they, along with EQs/filters, help with the placement of instruments "within" the emulated room.

    The less dry signal, and the less high-frequency data in the dry signal, the farther back you go. One problem people have is actually removing high-frequency data BEFORE putting the signal into the reverb, which is unnatural and is why people get muffled performances. You need to consider WHAT signal you are EQing. Pre-EQ/fader auxiliary sends help you EQ and still get the sparkle you need (see the rough routing sketch after this post).

    This doesn't mean you don't need to EQ the reverb send signal a bit, it's just that it will be a little different from the EQ of the dry signal you want to keep.

    Another thing to watch out for is "close-mic'd" samples that have too much proximity effect, in which case you may want to lower some low mids in the actual sample. Snare drums especially give me this problem.

    VSL's stuff isn't close-mic'd though, so you shouldn't have much of a problem.


    On a side note, I tend to lower some low mids on my stuff in the final mix to keep it from getting muddy.


    Have I confused you yet?
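
    A minimal Python/NumPy sketch of the routing idea above, assuming a 44.1 kHz project; the function names, gain values, and the simple pre-fader model are illustrative assumptions, not settings from this post:

    ```python
    import numpy as np

    SAMPLE_RATE = 44100  # assumed project sample rate in Hz

    def predelay_samples(predelay_ms, sample_rate=SAMPLE_RATE):
        """Convert a reverb predelay in milliseconds to a sample offset."""
        return int(round(predelay_ms * sample_rate / 1000.0))

    def prefader_mix(dry, reverb_return, fader_gain=0.7, wet_gain=0.3):
        """Mix a dry track with its reverb return.

        With a pre-fader send the reverb is fed before the channel fader,
        so pulling the fader down (pushing the instrument "back") does not
        also starve the reverb of input; modelled here by scaling the dry
        and wet parts independently.
        """
        n = min(len(dry), len(reverb_return))
        return fader_gain * np.asarray(dry[:n]) + wet_gain * np.asarray(reverb_return[:n])

    # The predelays mentioned above, as sample offsets at 44.1 kHz:
    print(predelay_samples(35))  # 1544 samples
    print(predelay_samples(50))  # 2205 samples (a "long" predelay, i.e. a big hall)
    ```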

  • I don't know about him, but for me it was sort of an accelerated course on reverb, lol. [:D]

    I understood most of it though, and it was VERY informative (thanks). The only thing I couldn't get was the slap back and predelay stuff for the horns. That was Chinese to me. [*-)]

  • Generally, the predelay setting in a reverb sets how long it takes for the reverb to "sound"; it's especially noticeable with higher early-reflection settings. What you essentially get is "delay time" like you would get from normal delay/tap-delay effects, it's just that the delayed signal is also reverb.

    Horns face away from the conductor. Essentially, the sound you hear from horns comes from the reverberation. The farther you move away from the front of the orchestra, towards the "cheap seats", the more noticeable this becomes, as you start to lose the actual "placement" of the horns in the mix.

    What's happening is that you are hearing the sound of the horns playing towards the back wall and bouncing off into all directions.

    Usually in orchestral recordings you hear the horn sound bounce back pretty quickly off the studio wall. Since it's usually not too far from the players in studio settings, you actually hear precise reflections from the center of the back wall. It's really noticeable on articulations like staccato at f/ff and anything that has a bit of power (or in softer sections and short articulations).

    Even with baffles set up in a real recording, I'd expect to hear the sound of the horns bounce off any back wall, since they are still pointing in that direction. The only time it wouldn't happen is in overdubs in a small room, or if you had a stereo rig hanging not too far above the horns *ONLY*, and no overheads on the orchestra, and no other mics on the other instruments. Which is why I believe we always got that "direct stereo" sound.

    As I mentioned before, I tend to put the horns through their own reverb setting with a pretty short decay, but enough predelay to hear defined early reflections, then I collapse the reverb's "wet" signal to near mono to simulate the "back wall". Essentially creating a slap back (there's a rough sketch of this chain after this post). I tend to EQ the reverb return as well, but that's to taste.

    I'm sure Dietz has his own ideas too; he's pretty amazing at what he does. It'd be interesting to see what these guys do in terms of helping VSL users get more variety in their sounds. I know they are very forward-thinking and aren't looking at this product as "it's flexible, it's up to you to make it work for you"; it's more like "it's flexible and we want to help it work for you".

    All we can do is support them and feel confident that the end user is someone they intend to help.... and wait [;)]

    I still can't believe what's gone into this library. I'm a geek for samples, but these people are mad. The legato instruments alone take 23 times as long to sample as a normal sustain instrument would, not to mention the programming and feature development.
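
    As a rough illustration of the slap-back chain described above (short-decay reverb with predelay, wet signal collapsed to near mono, returned behind the dry horns), here is a Python/NumPy sketch; the decaying-noise impulse is only a stand-in for a real short-decay reverb plug-in, and the function name, 40 ms predelay, and gains are assumptions:

    ```python
    import numpy as np

    SAMPLE_RATE = 44100  # assumed project sample rate in Hz

    def horn_slapback(stereo, predelay_ms=40.0, decay_s=0.3, width=0.1,
                      level=0.4, sample_rate=SAMPLE_RATE, seed=0):
        """Crude "back wall" slap for a stereo horn track (shape [n, 2]).

        Chain: a short synthetic reverb (decaying-noise impulse) fed after a
        predelay, its wet signal collapsed to near mono, returned at a lower
        level so it sits behind the dry horns and the main hall reverb.
        """
        n = stereo.shape[0]
        rng = np.random.default_rng(seed)

        # Synthetic short-decay impulse response (stand-in for a real reverb).
        ir_len = int(decay_s * sample_rate)
        t = np.arange(ir_len) / sample_rate
        ir = rng.standard_normal(ir_len) * np.exp(-6.0 * t / decay_s)
        ir /= np.sqrt(np.sum(ir ** 2))  # normalise energy

        # "Reverberate" each channel, then apply the predelay.
        offset = int(round(predelay_ms * sample_rate / 1000.0))
        wet = np.zeros((n, 2))
        for ch in range(2):
            tail = np.convolve(stereo[:, ch], ir)[:n]
            wet[offset:, ch] = tail[: n - offset]

        # Collapse to near mono: keep the mid, only a little of the sides,
        # so the slap seems to come from one spot on the "back wall".
        mid = wet.mean(axis=1)
        side = (wet[:, 0] - wet[:, 1]) * 0.5
        wet[:, 0] = mid + width * side
        wet[:, 1] = mid - width * side

        return level * wet  # mix with the dry horns and the main hall reverb
    ```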

  • Thanks a lot for your answer.

    Multiple reverbs? I heard that the standard for Hollywood film scores is to give the brass a small room/hall and the strings a large room/hall.
    I think your suggestions also simulate the mic positions.
    Outriggers for room sound: I think it takes a lot of experience to simulate that with the other reverb parameters of every group. I imagine giving every orchestra group a lot of direct signal and carefully adding the "outrigger simulation" with the more diffuse signal (dry/wet in/out, predelay parameter, ...). Of course EQ is always a good additional and indispensable "toy" to push the groups further back or forward, but you have to know the right level in connection with the effect parameters [8-)]
    OK, I'll try and try again to find what sounds best, and if it someday turns out as good as I imagine (maybe not so good for other ears?) [:)] I'll describe it.

    Thomas

  • Don't forget that (dynamics) compression is another powerful tool to vary the impression of a true room. As a matter of fact, real acoustic environments tend to behave nonlinearly above a certain point of loudness. Apart from that, my ears like the added density I'm able to achieve this way.

    ... in a non-classical context, I even distort reverb tails pretty often. It gets this incredible "ooommmph", like an old Led Zeppelin record ;-]

    /Dietz - Vienna Symphonic Library

  • this thread is going in the right direction [H]

    Thanks, guys!! I wish I could add something useful myself, but my experience with classical reverberation is limited.
    If somebody wants to know something about certain DX or VST plugins, I'll be able to help, I think [:)]

    Geert.

  • Dietz, haha, EXACTLY

    So I was right [:)] I wonder how much of that takes place in larger halls.

  • Though this has been a very technical discussion of equipment I'm sure I will never own due to not being a billionaire several times over, I'm wondering what Dietz thinks about this problem put in very non-technical terms -

    I think it is absolutely FORBIDDEN to use more than one type of reverb, because that is contrary to my entire goal of creating in the simplest way the natural sound of instruments within a space. The exact opposite of what I'm trying to do is found on pop recordings, where you hear strings with heavy artificial reverb tails while the singer is upfront and dry, etc. I am not talking about what engineers have or have not done on recordings, but rather the experience of a person sitting in a concert hall listening to orchestral music. If you tried to create that most directly, it would involve one reverb dry/wet level for everything, because the sound is actually coming from all the instruments at about the same ambient level at any one position. So in effect, you would have to use a rather wet amount on percussion, which is always in the back of the orchestra, but which is always traditionally mastered very dry. And the violins would have no more reverb than the percussion. But that is the opposite of what is done on recordings, for good reason.

    What I've done is to cheat, and I'm wondering if that is basically your approach. You HAVE to use more reverb on strings, because psychologically they need more "space." And you HAVE to use less on percussion or harp or they become muddy. Likewise for other things, like solo woodwinds, etc. So I attempt to tweak only one reverb type very delicately and subtly, and hopefully a listener thinks it is all one level of dry to wet, though it actually isn't (there's a small sketch of this single-reverb idea after this post).


    Is this something like what you do? Do you have any tips for someone with limited means? I am using a Lexicon externally because I've never heard any software reverb I liked.
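
    To make the "one reverb, different send amounts" cheat above concrete, here is a tiny Python/NumPy sketch; the section names and send levels are invented for illustration and are not anyone's actual settings:

    ```python
    import numpy as np

    # Invented per-section send levels into ONE shared reverb bus:
    # wetter strings, drier percussion and harp.
    SEND_LEVELS = {
        "strings": 0.45,
        "woodwinds": 0.35,
        "brass": 0.30,
        "harp": 0.20,
        "percussion": 0.15,
    }

    def mix_with_single_reverb(sections, reverb, send_levels=SEND_LEVELS):
        """Sum dry sections plus one shared reverb return.

        `sections` maps section name -> mono numpy array (equal lengths);
        `reverb` is any callable turning a mono signal into an equal-length
        wet signal (a stand-in for the single hardware/software reverb).
        Every section feeds the same reverb, just at a different send level.
        """
        n = len(next(iter(sections.values())))
        dry_sum = np.zeros(n)
        reverb_input = np.zeros(n)
        for name, signal in sections.items():
            dry_sum += signal
            reverb_input += send_levels.get(name, 0.25) * signal
        return dry_sum + reverb(reverb_input)
    ```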

  • Yeah. Seeing how this thread went WAY over my knowledge in several parts, I would also be glad to hear about ONE reverb that does a good job with just tweaks here and there for different instruments.

    In my case I'm more interested in a software solution than a hardware one.

    I'm aware that this simplified solution won't give me the ultimate in realism, but hopefully it will make it possible for me to slowly build up the experience I need, so later on I can try more advanced tricks.

    So to sum things up: if I was on an uninhabited isle with just one PC, VSL and ONE REVERB (it can be DX or VST), which one would you choose?

    Seeing how I asked more stuff and didn't reply to William, don't forget about his requests for hardware tips too...

    PS: on my isle I would also have a power supply, obviously [:)]

  • William,

    I know this was for Dietz, but I'd like to throw out a couple of things I see.

    I'm using mostly one reverb for most of my basic mixes and sometimes finals; if not, then two instances of the same reverb with slightly different EQ settings, so I can manipulate the "placement" much better. It works fine in most cases, and I don't do much more, due to the time I want to spend composing. (Though I'm spending time creating impulse responses from different reverb settings for myself, to make things go quicker.) Still, I don't consider my approach the best way to create a virtual room.

    One must remember that reverbs are still just simulations of ambient responses, and they will never account for the direction that sound travels from specific instruments. The bells on brass instruments, for instance, force specific frequencies to move in particular directions at louder volumes. This is why I always try to emulate "slap back" with the horns. You'll NEVER get this effect without a separate reverb or delay, in my experience, unless you are using an impulse that was taken with monitors directed at the back wall (and I'm not even sure if that will work, but it should). I don't consider impulse-response technology "reverb", because it's more than that. It's akin to what sampling is to synth sounds (impulse = sample, reverb = synth); there's a tiny convolution sketch after this post.

    Of course this all depends on the size of the room you are trying to emulate and the arrangement of the music [:)]

    I'm sure Dietz has his own thoughts, and that's what's amazing about discussions like this. We all hear different things. We can get techie all we want, but in the end it's about what all of us hear, and it's so different from person to person sometimes. There are some things we can focus on as a whole, though [:)] Like how the onboard reverbs on keyboards generally suck [:)]


    Netvudu,

    I haven't found a software reverb I'm totally sold on, but combinations are pretty nice. If you're looking for a "single solution" right now, it will have to be hardware IMO.

    I'm even getting fine results with a Roland VS-1680 FX card's reverb. I only keep this unit because I like the reverb [:)]. It's a hard disk recorder that I bought quite some time ago; I now use it as an aux mixer of sorts. The reverbs are great, but easy to overload. However, keeping the volume levels down on the sends and cranking up the outs works really well.

    Lexicons are great too. You can get a better "stage" sound out of them, and they also have great decays.

    If you MUST use plug-in verbs, I'm not sure. I like the Ren verb sometimes, and hate it other times. It can really muddy up tracks, but using some multiband compression or just some good EQ can help it out tremendously.
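
    As a side illustration of the impulse-response point above (impulse = sample, reverb = synth), here is a minimal Python/NumPy convolution sketch; the impulse response here is synthetic noise purely for demonstration, not a captured room:

    ```python
    import numpy as np

    def convolution_reverb(dry, impulse_response, wet_gain=0.3):
        """Convolve a dry signal with a room impulse response and mix it in.

        A convolution reverb plays back a captured space's response,
        while an algorithmic reverb synthesises the tail from parameters.
        """
        wet = np.convolve(dry, impulse_response)[: len(dry)]
        return dry + wet_gain * wet

    # Toy usage with a made-up one-second decaying-noise IR at 44.1 kHz:
    sr = 44100
    t = np.arange(sr) / sr
    fake_ir = np.random.default_rng(0).standard_normal(sr) * np.exp(-4.0 * t)
    fake_ir /= np.abs(fake_ir).max()
    click = np.zeros(2 * sr)
    click[0] = 1.0                     # a single click as the "dry" signal
    out = convolution_reverb(click, fake_ir)
    ```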

  • Thanks, King, I couldn't put it better.

    /Dietz - Vienna Symphonic Library

  • That's a great idea with the "slapback" on brass, and you're exactly right about the sound being directed forcefully by the bell, as opposed to the more passive projection of some other instruments.

    Thanks for the great tips.

  • last edited

    @Netvudu said:


    So to sum things up: if I was on an uninhabited isle with just one PC, VSL and ONE REVERB (it can be DX or VST), which one would you choose?


    I do digital recording and mastering, and there are a couple of software reverbs that have served me very well. One is Ultrafunk ReverbX. Check them out at www.ultrafunk.com. At $50, this utility is a steal. It comes in both VST and DX versions and sounds great.
    The other one is the Timeworks Reverb. They're at www.sonictimeworks.com. I don't own their reverb (I ended up going with Ultrafunk), but I downloaded the 14-day no-limitation demo and it sounds incredible. It only comes in DX.
    So take your pick. These are the best, IMHO.

    ~Chris

  • Thanks, Chris. I'll check those.

  • I tried the VB Aphro reverb and it is VERY lush

    to check...

    http://vbaudio.jceinformatique.com/down_vs3_aphrov1.htm

    Geert.

  • Well, I should be infamous by now to anyone who has listened to my compositions that were on Northernsounds.com. The reverb on all of them was horrible, even though the compositions themselves were not all bad. I am using Acoustic Mirror, but it always sounds too wet or too dry. Also, how can you route a channel directly through Acoustic Mirror?

    Thanks in advance

  • The best option is to record each section (or each instrument if you're really picky) to an individual track, then use an app like Vegas to do your mixing.

    You can then use aux sends to a specific bus with Acoustic Mirror on it. This way you can mix and match dry signals with the wet signals, as well as deal with panning. I'd also use some reverb (sound stage) on each channel with early-reflection data to create a "stage", then aux-send that signal to Acoustic Mirror.



    I'm not sure you got any of that.....since there are some general "mixing" terms there.

  • OK, well I understand pretty much all of that, except: how do I import stuff like Cakewalk FX3 into GS so I can use it in real time?