Vienna Symphonic Library Forum

  •

    @nektarios said:

    ....Now that I got the new speakers, I may connect all my speakers to the new subwoofer. I also got some foams for my walls for sound absorption, but I think I will need more. 😊

    This is my studio where I usually work:

    https://dl.dropboxusercontent.com/u/33556625/Images/2015-07-22%2021.34.04%20HDR.jpg

    Here is the piece I am working on -- the way I mastered it:

    https://dl.dropboxusercontent.com/u/33556625/Music/Nektarios%20-%20Eastern%20Dream%2034.mp3

    ....

    Hi Nektarios

    Thanks for all your info about your workplace and the piece you are working on.

    About your Workplace and Listening Situation

    It looks nice and it seems a pleasant place to work. From an acoustic point of view, though, it is far from an optimized workplace for mastering tasks. Type "mastering studio" into Google and look at the images... Your monitors should stand free in the room on speaker stands, some centimeters away from each wall. Currently your monitors sit inside a "box" of furniture (80 cm x 60 cm = a resonance around 300 - 500 Hz?).

    Bass and corners are enemies! Each corner blows up bass frequencies... So your subwoofer - even if it is nice and new - should be placed anywhere but not in a corner. And because the wavelength of 100 Hz is 3.45 m (50 Hz = 6.9 m), you can see that your current acoustic treatment is useless for the subwoofer. For the deep tones you need so-called "bass traps"... (see the internet).
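
    (A quick way to check these numbers yourself - a minimal Python sketch of the wavelength arithmetic, assuming a speed of sound of roughly 345 m/s, which is what reproduces the 3.45 m figure above:)

        # Wavelength of a tone: lambda = c / f
        SPEED_OF_SOUND = 345.0  # m/s, an approximate room-temperature value

        def wavelength_m(freq_hz: float) -> float:
            """Wavelength in metres for a given frequency."""
            return SPEED_OF_SOUND / freq_hz

        for f in (50, 100, 300, 500):
            print(f"{f} Hz -> {wavelength_m(f):.2f} m")
        # 50 Hz -> 6.90 m, 100 Hz -> 3.45 m, 300 Hz -> 1.15 m, 500 Hz -> 0.69 m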

    The cheapest improvement - and a big step forward - is probably to take the monitors out of the furniture and to reposition the subwoofer... or do that with the next move?

     

    About your piece

    A) One problem is the bass. It comes with a huge reverb tail, which makes it very difficult to get more pressure and more clarity... So I would filter the tail of the reverb so that it does not touch the bass so much.

    B) Your piece contains a sort of resonance (2.4 kHz and others) which is a bit annoying.

    Listen to this short sequence. I tried to remove it, but only for some short stretches, so that you can hear what I mean. Here is an example from a bit later (first 12 s: resonance removed as well as possible, plus a bit of bass removed with a dynamic EQ - second 12 s: your original). Unfortunately you always remove too much in a final mix. So you should "remove" it within the mix, where it is produced, and not in the master. BTW, the resonance is so complex that a simple EQ - as I used - does not fix the problem. As far as I can tell, this resonance sits somewhere in the whole mix. Unfortunately I don't know what it is. A phaser? A ringing EQ? Vitamin or another effect? The venue of...

    The mastering process begins within the mix...

     

    All the best

    Beat


    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  • Wow, thank you Beat!!

    Really appreciate your helpful advice and your great examples. It does sound better! Thanks so much for these examples!

    I just got myself two bass traps and one more foam but will keep adding. Not sure I can move the speakers for now. I calibrate my speakers to what I feel sounds pleasant. In other words, the subwoofer has a significantly lower volume in relation to the other speakers. I also put the crossover to 85Hz. What I hear coming out is actually pleasant. Curious how it will sound with the bass traps.

    Concerning my piece, it combines MIR PRO with MIRacle, and I used the preset to increase the tail of the Grosser Saal. I also bumped up the seconds for the tail. Are you saying I should cut all low frequencies from the algorithmic tail? Also from the wet signal of MIR? How about just making the bass instruments mostly dry?

    I do hear the resonance in the mix that you mentioned, and it does baffle me. I don't know where it's coming from! I have my suspicions, though. I am using an iZotope Alloy 2 preset (for the brass), which uses an exciter. I will post a screenshot of my VE PRO screen later.

    Thanks again!


  •

    @nektarios said:

    ...I just got myself two bass traps and one more foam but will keep adding. Not sure I can move the speakers for now. I calibrate my speakers to what I feel sounds pleasant. In other words, the subwoofer has a significantly lower volume in relation to the other speakers. I also put the crossover to 85Hz. What I hear coming out is actually pleasant. Curious how it will sound with the bass traps...

    Hi Nektarios

    Of course you don't need to change anything about your setup. Using monitors in their best way means, first of all, keeping room and other influences away as much as you can. See this image for the small monitors. If you are not able to install your system in a better way, it is not the end of the world. No problem. As I mentioned... with the next move.

    If that bass sound sits within a long tail as well, you will have almost no possibility to treat anything in the bass range. How do you separate the bass and the rhythmic bass drum in such a bass soup? So as long as the bass matters are not sorted out, you should not add a lot of reverb in the bass. Have a look here at Breeze-Reverb (just as an example): the blue low-shelf EQ curve filters the low frequencies out of the tail, so that mainly the higher frequencies come with the tail. So yes, if you have the possibility, try to do something similar with all the reverbs you are using... less tail below 150 Hz... 200 Hz, I would say.
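
    (If a reverb has no such low-shelf of its own, here is a minimal offline sketch of the same idea in Python - assuming numpy/scipy and a mono float array for the wet return; the function name and the simple high-pass standing in for the low-shelf cut are just for illustration:)

        import numpy as np
        from scipy.signal import butter, sosfilt

        def tame_reverb_lows(wet: np.ndarray, sr: int, cutoff_hz: float = 180.0) -> np.ndarray:
            """Roll off the low end of a reverb return so the tail carries
            mainly the higher frequencies (a strong cut below ~150-200 Hz)."""
            sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
            return sosfilt(sos, wet)

        # usage sketch: filter only the wet (reverb) signal, then sum it with the dry mix
        # full_mix = dry + tame_reverb_lows(wet, sr=44100)

    (In practice you would of course do this with the EQ inside the reverb plug-in itself, as in the Breeze example.)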

    Beat


    - Tips & Tricks while using Samples of VSL.. see at: https://www.beat-kaufmann.com/vitutorials/ - Tutorial "Mixing an Orchestra": https://www.beat-kaufmann.com/mixing-an-orchestra/
  •  

    Is there a VST plugin that will allow me to detect resonant frequencies reliably? I don't want to rely on my ears. Ideally, some tool that will visually tell me what frequencies are resonating. 

    I was thinking, if I know what frequencies are resonating, I can add a dynamic EQ in those frequency ranges to dampen them. Any thoughts?

    Thank you!

    -Nektarios


  • Lots of great advice in this thread already, but here's my 2 cents.

    1. Go to a mastering studio and master something, but first, make sure you find one that is OK with mastering digital-only, that masters in the software you use, and with plugins that you know. You can even request to use your own computer! Mastering is a buyer's market, unless you're going for the top names, and there are many helpful people out there with great studios and great ears who might accommodate such requests in a flexible way. That way you could bring your professional master home with you!

    2. If the loud parts are simply "too loud", then that means that the soft parts aren't loud enough. The piece simply is still too dynamic, because "too loud" just refers to relative levels. If that's not the case, you have to look into certain frequencies that may poke out in an unpleasant way as the orchestra gets louder. That may well be the case, but then that's not really dependent on how loud the piece is. If you really want an even volume throughout the piece, though, don't try to achieve that with compressors and limiters; start first with volume automation. You'll be in a better place when it's time to master.

    It has been mentioned that "you shouldn't master classical stuff in that way", but that really depends on the usage. Sometimes you just need an even level, so that it appears to be dynamic when it's not, really - in games, and even more so in commercials. That said, since your room isn't perfect, be wary of making those decisions without some advice from other people in other rooms. Good luck!


  •

    @nektarios said:

     

    Is there a VST plugin that will allow me to detect resonant frequencies reliably? I don't want to rely on my ears. Ideally, some tool that will visually tell me what frequencies are resonating. 

    I was thinking, if I know what frequencies are resonating, I can add a dynamic EQ in those frequency ranges to dampen them. Any thoughts?

    Thank you!

    -Nektarios

    Well, any good analyzer, like the one that's included in the Vienna Suite, will reliably show you a graphical representation of the frequency range. It shows what's happening in the audio. But it obviously won't start flashing and sound an alarm when a piece starts sounding "bad" ... 😊 A plugin can only show you that there's so-and-so much loudness in this-and-that range, that's something that can be visualised of course. But a computer program can't "know" what sounds good or bad. That's something the artist has to judge.
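
    (To illustrate what such an analyzer computes under the hood, here is a minimal offline sketch - not the Vienna Suite analyzer, just plain numpy/scipy on a mono float array; it only lists prominent narrow peaks as candidates to audition, it cannot judge them:)

        import numpy as np
        from scipy.signal import welch, find_peaks

        def list_spectral_peaks(audio: np.ndarray, sr: int, top_n: int = 5) -> None:
            """Average the spectrum of a mono track and print its most prominent
            narrow peaks - candidates to check by ear, not verdicts on good or bad."""
            freqs, power = welch(audio, fs=sr, nperseg=8192)
            power_db = 10 * np.log10(power + 1e-12)
            peaks, props = find_peaks(power_db, prominence=6.0)  # peaks sticking out by >= 6 dB
            strongest = np.argsort(props["prominences"])[::-1][:top_n]
            for i in strongest:
                print(f"{freqs[peaks[i]]:8.1f} Hz   {power_db[peaks[i]]:6.1f} dB")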

    It's not the response you'd want to hear, but what you're asking for is exactly how not to do it. 😃 I mean, consider the humor of the statement: I don't want to rely on my ears. It's like a painter saying: I don't wanna rely on my eyes, or a cook stating that he'd rather not rely on his senses of smell and taste.

    It gets better and makes more and more sense the more you read, research and practice. But if you think you can't, or don't want to do that, there really is no other option than to have someone else mix and master your music. A technical shortcut really doesn't exist, it can't.

    And IMO, really just try to get better at mixing your own music. EQing stuff, making instruments sound good, setting levels, making instruments sit in the mix, not fighting or masking each other etc. I think you shouldn't really bother about the mastering thing. You can't really do it without a proper space, equipment and professional experience.


  •

    @Another User said:

    It has been mentioned that "you shouldn't master classical stuff in that way", but that really depends on the usage. Sometimes you just need an even level, so that it appears to be dynamic when it's not, really - in games, and even more so in commercials. That said, since your room isn't perfect, be wary of making those decisions without some advice from other people in other rooms. Good luck!

    I am with you here. I prefer even levels, to be quite frank... I come from the electronic dance music genre, where everything is squished level-wise and loudness is everything. Mixing/mastering in the orchestral genre takes quite a bit of getting used to, but I like it.

    Thanks again!


  •

    @JimmyHellfire said:

    Well, any good analyzer, like the one that's included in the Vienna Suite, will reliably show you a graphical representation of the frequency range. It shows what's happening in the audio. But it obviously won't start flashing and sound an alarm when a piece starts sounding "bad" ... 😊 A plugin can only show you that there's so-and-so much loudness in this-and-that range, that's something that can be visualised of course. But a computer program can't "know" what sounds good or bad. That's something the artist has to judge.

    It's not the response you'd want to hear, but what you're asking for is exactly how not to do it. 😃 I mean, consider the humor of the statement: I don't want to rely on my ears. It's like a painter saying: I don't wanna rely on my eyes, or a cook stating that he'd rather not rely on his senses of smell and taste.

    It gets better and makes more and more sense the more you read, research and practice. But if you think you can't, or don't want to do that, there really is no other option than to have someone else mix and master your music. A technical shortcut really doesn't exist, it can't.

    And IMO, really just try to get better at mixing your own music. EQing stuff, making instruments sound good, setting levels, making instruments sit in the mix, not fighting or masking each other etc. I think you shouldn't really bother about the mastering thing. You can't really do it without a proper space, equipment and professional experience.

    I can understand what you mean. Let me explain where I come from. As an artist, I am mostly a visual artist (I am an oil painter). That is why I compose visually -- I work directly in the MIDI editor, and I can't compose by playing instruments directly. As for mixing, this is why I love MIR PRO: it's a visual/natural way of thinking about my compositions and mixes. Honestly, I can't go back to the old way of mixing now -- it's very difficult.

    On the other hand, one thing I do know acoustically is whether something sounds good or bad, too loud or too soft, whether there is too much low/high frequency content, etc. *But* my ears cannot pick up subtle things, such as whether there is a resonance in a particular frequency range. All I know is that it sounds bad, which includes the piece I posted. So when I know this, I'd love a tool that gives me visual feedback as to why it sounds bad...


  •

    @nektarios said:

    On the other hand, one thing I do know acoustically is whether something sounds good or bad, too loud or too soft, whether there is too much low/high frequency content, etc. *But* my ears cannot pick up subtle things, such as whether there is a resonance in a particular frequency range. All I know is that it sounds bad, which includes the piece I posted. So when I know this, I'd love a tool that gives me visual feedback as to why it sounds bad...

    That's impossible - the program cannot know what you don't like, and why you don't like it. There are no "bad" frequencies in an objective sense. There are just listening habits and conditioning. A program can show you a graph of the frequency response in a visual way, but how is it supposed to guess what you, or I, or a third person like or dislike about it? It may be completely different, and frankly, it is, all the time!

    For example, I dislike the excessive "oomph" of snare drum sounds that are very popular today. It seems to be very popular to hype the "body" of the drum and not have a lot of the crack of the actual snares. To me, it sounds like a compressed recording of someone beating a shoe box. But a lot of people seem to think it's great, because it's being done a lot on records. Now who's right? Can we ask the plug-in? Is blue a bad color? Or do you just dislike the amount of white I'm mixing it with, or is it just my overuse of blue that makes it so unpleasantly striking?

    If you can tell a difference acoustically between what you like and what you don't like, and can perhaps even pinpoint it to highs or lows, that's a good start. You just need to refine this ability. That comes with practice. In the beginning, all you can say is "somehow it sounds too painful". Later, you're able to say "it's because the high frequencies are too harsh". Yet later on, you are able to pinpoint it to a certain range, because through experience, you learn to associate highs around 6500 Hz with a completely different quality and sonic effect than those around 12k.

    It's like muscles. The more regularly you train them, the better they develop. The goal is to be able to attribute effects and sound aesthetics to frequencies. And when we combine that with what we know about the construction and materials of the instruments, and the way they're played, it all makes sense.
    To look for resonances that might be problematic, do the sweeping technique. Grab an EQ band, set it to +20 dB, and drag it slowly across the whole spectrum while the track is looping. You'll hear the ugly ones once you've swept over them. Here's a trick: try to sing them! Imitate their sound. Like imitating animals. You might sound silly and hopefully, there's nobody in the room while you do it, but just try it 😃 Concentrate not on the whole sound, or the track, but just on that resonance that you exposed by heavily pushing the band. Internalize the sound by imitating it.

    Now reduce the EQ band again, slowly, towards 0, but keep imitating the resonance, or at least remembering its peculiar ring in your head ... notice the difference? Even if your EQ band isn't over-exaggerated to +20 dB any more - you can still hear a trace of the resonance in the sound! It was there all the time. But you never heard it consciously, because you weren't looking for it. It's like those little drawings where it says: what's wrong in this picture? If you didn't know that you're supposed to look for something, you wouldn't even realize that they hid some stupid shit in there.

    And now you can try and reduce the EQ band below zero. And listen closely. The resonance that you focused on so strongly - when does it start to fade, and become more difficult to still hear in the overall sound?

    That's how you learn it. The more you do that, the more tracks you mix, the more sensitive your hearing becomes to these things. And then of course, graphic analyzers can come in handy. Because you already assume that there must be something in this-and-that range, maybe it's 1400, but could also be 1800, not sure ... and if there's really a lot of something in those ranges, the graph will show it. The difference is: you know where to look.
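
    (For anyone who wants to try the sweep trick offline rather than in a DAW EQ, here is a minimal Python sketch - assuming numpy/scipy and a mono float array - of a single peaking band built from the standard RBJ "audio EQ cookbook" biquad; the function name and defaults are just for illustration:)

        import numpy as np
        from scipy.signal import lfilter

        def peaking_boost(x: np.ndarray, sr: int, freq_hz: float,
                          gain_db: float = 20.0, q: float = 8.0) -> np.ndarray:
            """One peaking ('bell') EQ band, e.g. +20 dB at freq_hz, to expose a
            suspected resonance for auditioning (RBJ audio EQ cookbook biquad)."""
            A = 10.0 ** (gain_db / 40.0)
            w0 = 2.0 * np.pi * freq_hz / sr
            alpha = np.sin(w0) / (2.0 * q)
            b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
            a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
            return lfilter(b / a[0], a / a[0], x)

        # e.g. audition the 2.4 kHz area mentioned earlier in the thread:
        # boosted = peaking_boost(mix, sr=44100, freq_hz=2400.0)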


  • Woow, very insightful and helpful! Perfectly said! Thank you!!!


    Managed to get some dramatic improvements. What I did was add a "Dynamic EQ" on all those resonant selections given by VSL in each instrument. Works wonderfully. I also added a low cut on MIRacle and reduced the exaggerated lows on my matching EQ. Played it in my car, and it's so much better!

    With the use of a dynamic EQ, the reductions happen only when higher velocities play. Love it! Anyway, I'm still improving, and not there yet.
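
    (For the curious, a very rough Python sketch of the idea behind such a dynamic EQ band - assuming numpy/scipy and a mono float array; a real plug-in does this with proper smoothing, attack/release and phase handling, this only illustrates the "cut the band only while it is loud" behaviour:)

        import numpy as np
        from scipy.signal import butter, sosfilt

        def dynamic_band_cut(x: np.ndarray, sr: int, center_hz: float,
                             width_hz: float = 300.0, thresh_db: float = -30.0,
                             max_cut_db: float = 6.0) -> np.ndarray:
            """Attenuate a narrow band only while that band is loud,
            leaving quiet passages untouched."""
            band_sos = butter(2, [center_hz - width_hz / 2, center_hz + width_hz / 2],
                              btype="bandpass", fs=sr, output="sos")
            band = sosfilt(band_sos, x)
            # crude envelope of the band level, in dB
            env_sos = butter(1, 10.0, btype="lowpass", fs=sr, output="sos")
            env = np.sqrt(np.abs(sosfilt(env_sos, band ** 2)) + 1e-12)
            env_db = 20.0 * np.log10(env)
            # subtract part of the band whenever it exceeds the threshold
            depth = 1.0 - 10.0 ** (-max_cut_db / 20.0)
            gain = np.where(env_db > thresh_db, depth, 0.0)
            return x - gain * band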


  • This is an interesting thread because one gets to hear Dietz's ideas and "soapbox."  His mixes are the best of any sample performances I've ever heard.  So I love it when he reveals some of his brilliant concepts on these forums. 

    And mastering is very complex. What I am trying to do is deal with the great advantage that samples give one, compared to live performance, and do things during the actual performance which affect mastering. For example, reverb levels, individual instrument EQ, and overall hall sound, which might be dealt with in mastering - but not as effectively after the final mix is done. I think you could even say that orchestration has a huge effect on mastering. Why? An example is the sound of a bass drum. If you listen to an orchestral bass drum at a concert, you will immediately notice it has an extremely deep, powerful bass that is deeper in fact than any other instrument on the stage. But if you listen to a sample orchestra, it is just another bass frequency, along with contrabassoon, cellos, bass clarinet, tuba, etc. They are all present in their full audio spectrums because that is the goal of audio recording.

    Also, if you listen more at that same live concert, you will notice that the entire woodwind section has almost no bass, because the actual bass instruments present - whether bassoon or contrabassoon - simply do not have as much amplitude in that frequency range especially from a normal listening distance.  Another example is the infamous high frequency sound of sample violins.  If you listen to violins live, you will notice they are far darker than any sample recording.  That is because the higher frequencies simply do not have as much power as the mid range of that instrument.  That mid range is the main area of its audible projection out into an audience.

    So my point is that when using samples, many aspects of mastering can actually start with the performance itself.  Which is a good thing because one has so much control over every parameter of sound conceivable.


  •

    @William said:

    I think you could even say that orchestration has a huge effect on mastering.

    120% true!

    /Dietz - Vienna Symphonic Library
  •

    @William said:

    This is an interesting thread because one gets to hear Dietz's ideas and "soapbox."  His mixes are the best of any sample performances I've ever heard.  So I love it when he reveals some of his brilliant concepts on these forums. 

    And mastering is very complex. What I am trying to do is deal with the great advantage that samples give one, compared to live performance, and do things during the actual performance which affect mastering. For example, reverb levels, individual instrument EQ, and overall hall sound, which might be dealt with in mastering - but not as effectively after the final mix is done. I think you could even say that orchestration has a huge effect on mastering. Why? An example is the sound of a bass drum. If you listen to an orchestral bass drum at a concert, you will immediately notice it has an extremely deep, powerful bass that is deeper in fact than any other instrument on the stage. But if you listen to a sample orchestra, it is just another bass frequency, along with contrabassoon, cellos, bass clarinet, tuba, etc. They are all present in their full audio spectrums because that is the goal of audio recording.

    Also, if you listen more at that same live concert, you will notice that the entire woodwind section has almost no bass, because the actual bass instruments present - whether bassoon or contrabassoon - simply do not have as much amplitude in that frequency range especially from a normal listening distance.  Another example is the infamous high frequency sound of sample violins.  If you listen to violins live, you will notice they are far darker than any sample recording.  That is because the higher frequencies simply do not have as much power as the mid range of that instrument.  That mid range is the main area of its audible projection out into an audience.

    So my point is that when using samples, many aspects of mastering can actually start with the performance itself.  Which is a good thing because one has so much control over every parameter of sound conceivable.

    Thank you, William. You are absolutely right. I try to keep the live performance in mind when mixing, which is why I start to feel uncomfortable when I begin using plugins like compressors, etc.

    In general, if I go to a live performance and sit close to the front, it will simply sound great: all the frequencies blend nicely, and in general it sounds perfect. Now, suppose I take a digital recorder with me and record the live performance. When I go home and play it back, it obviously won't sound as good as the live performance. Most likely the perceived "loudness" will need to be increased, so it would generally need some "work" to sound great. So, to me, mastering gives it this pleasant feel. That is why I always wondered: how can I increase perceived loudness without killing dynamics?
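
    (One way to put a number on that trade-off - a minimal sketch, assuming numpy and a mono float array: the crest factor, the gap between peak and average level, is roughly what shrinks when average loudness is pushed up at the expense of dynamics:)

        import numpy as np

        def peak_rms_crest(x: np.ndarray) -> tuple[float, float, float]:
            """Return (peak dBFS, RMS dBFS, crest factor in dB) for a mono signal.
            A smaller crest factor means louder on average but less dynamic range left."""
            peak = np.max(np.abs(x)) + 1e-12
            rms = np.sqrt(np.mean(x ** 2)) + 1e-12
            peak_db = 20.0 * np.log10(peak)
            rms_db = 20.0 * np.log10(rms)
            return peak_db, rms_db, peak_db - rms_db

        # comparing a master before and after limiting shows how much dynamics were traded away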


  •

    @William said:

    This is an interesting thread because one gets to hear Dietz's ideas and "soapbox."  His mixes are the best of any sample performances I've ever heard.  So I love it when he reveals some of his brilliant concepts on these forums. 

    Dietz, a few weeks ago I tried private messaging you because I wanted to hire you to mix/master my stuff. 😇 Your inbox was full, so I couldn't send anything...


  •  

    It's interesting: as I am reading some orchestration books, many of the concepts they cover are really mixing/mastering concepts. It's like hundreds of years of refinement of how to have different instruments play together nicely.

    William is totally correct. When I listen to violins live, I love the "darker" sound they produce. So to really imitate this in MIR PRO/VE PRO, I'd have to cut a bit out of the violins' high frequencies.

    I wish MIR PRO had an auto-EQ concept based on live performances. So if I place my 8 Dimension Strings violins in a venue, they would be automatically EQed based on how they would sound in a live performance. For now, I just make this adjustment myself (see the sketch below).
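
    (A minimal sketch of that manual adjustment, just for illustration - assuming numpy/scipy and a mono float stem; blending in a zero-phase low-passed copy shaves a little off the top end, a crude stand-in for a gentle high-shelf cut:)

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def darken(stem: np.ndarray, sr: int, corner_hz: float = 6000.0,
                   amount: float = 0.3) -> np.ndarray:
            """Blend a low-passed copy into the stem to soften the highs slightly,
            e.g. to take some edge off sampled violins (0 = no change, 1 = fully low-passed)."""
            sos = butter(2, corner_hz, btype="lowpass", fs=sr, output="sos")
            dull = sosfiltfilt(sos, stem)  # zero-phase, offline only
            return (1.0 - amount) * stem + amount * dull

        # violins_darker = darken(violins, sr=44100)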


  • Have you ever tried the individual Character EQs MIR Pro supplies for almost every Vienna Instrument? That's several thousands of tailor-made EQ settings available at the tip of your finger. I think I won't go further than that. 8-) Kind regards,

    /Dietz - Vienna Symphonic Library
  •

    @Dietz said:

    Have you ever tried the individual Character EQs MIR Pro supplies for almost every Vienna Instrument? That's several thousands of tailor-made EQ settings available at the tip of your finger. I think I won't go further than that. 8-)

    Kind regards,

    Oh yes! That is awesome! But I have no idea what the EQ curve looks like for those character presets.... 😊

    I can't even access the character EQ... Can you please make this available to users? I know the character EQ is exposed to a few people...


  •

    @Dietz said:

    Have you ever tried the individual Character EQs MIR Pro supplies for almost every Vienna Instrument? That's several thousands of tailor-made EQ settings available at the tip of your finger. I think I won't go further than that. 8-)

    Kind regards,

    Oh yes! That is awesome! But I have no idea what the EQ curve looks like for those character presets.... 😊

    I can't even access the character EQ... Can you please make this available to users? I know the character EQ is exposed to a few people...

    Those "few people" are actually two guys on this planet: MIR Pro's main software engineer Florian Walter and me. ;-D I doubt that these settings will be made freely accessible any time soon. I think that it is a good approach to use one's ears every now and then (rather than looking at a graphic representation of some audio processes). And apart from that there's so much work involved that I hope that you will understand that we won't give them away for free. Call it "trade secret" ... 😉 ... but I seem to remember that I included some kind of "MIR Instrument Character typology" in MIR Pro's manual, so you can look up what to expect approximatively. Kind regards,

    /Dietz - Vienna Symphonic Library
    I should add that the character presets are a huge resource in MIR, one of the best things about it. Especially considering how one can use a slightly different color for a 2nd player or group. An example is setting your second violins to "dark" in order to make the 1sts sound out more. Or many other variations. Using the "bite" variations on basses helps to make rhythms much clearer. Also, I have started to think that it should almost be normal practice to use "clean low ends" or "clean mids" for various instruments such as harp and cello. Those presets represent an expertly tweaked example of the basic thing I was talking about: the deep bass notes that certain instruments produce (such as a cello section) are hardly present at all in normal orchestral settings, due to many factors, but mainly 1) the number of players, 2) the resonance of the instruments involved, and 3) the characteristics of amplitude in the particular range the instrument plays.

    Also, the contrast of these levels throughout different instrument groups has a huge effect. If you are recording a chamber group, you will hear a huge amount of bass from the low strings. If you are recording an R. Strauss-sized orchestra playing "Also sprach Zarathustra", you will hear almost NO BASS from those same instruments, because the recording is adjusting for the massive brass and percussion that almost obliterate the strings at times. These factors are crucial to bear in mind during both mixing and mastering with samples, because absolutely no adjustment is made for them automatically - quite understandably, since the goal of any sample recording is to absolutely suck dry any instrument in front of the microphones and capture every nuance.