An Ultimate Mixing Template for Simulating a Classical Recording & Mixing Environment (help needed from classical recording engineers)
Some years ago I first encountered the virtual orchestra, and for years I have been learning how to mix it. After I got Altiverb and some other IR-based reverbs, I decided it was necessary to create an Ultimate Mixing Template, for everyone and for myself. I also want to share some knowledge I have gained from this forum, along with my own thoughts.
Before I start, I want to lay out the basic ideas behind this project. Although I have never attended a classical recording session, I have researched the subject to form a picture of how it works. The template's goal is to re-create a physically correct studio classical recording session using the original sample signals, IR-based reverbs, and so on. The work also takes the original sample recording's mic placement and many other physical facts into account. A second goal is to develop a theory of simulating a classical recording session (you could say one "based on physical facts"), so that users have more freedom to create their own virtual recording sessions. For this I need help from classical recording engineers and mixers. The ultimate goal is to re-create "reality", the final target for the virtual orchestra. I may also need some information from VSL about its own sample recording sessions.
So, please help me. Thanks!
OK, here is the project as it stands:
1. Study of a real classical recording session:
Session: the "Symphony Gundam SEED Destiny" recording session at Abbey Road Studios, performed by the London Symphony Orchestra. This session used 48 microphones; the channel list is shown below. From it I learned that a fine classical recording cannot be made with only one or two mics, yet that is essentially all an IR-based reverb offers today. So I need to combine the different characteristics of the original samples with IR-based reverbs. That is why this project exists.
-----------------------------------------------------
| 1 Main L | 2 Main C | 3 Main R | 4 LL |
-----------------------------------------------------
| 5 RR | 6 Surr L | 7 Surr R | 8 Main2 L |
-----------------------------------------------------
| 9 Main2 R | 10 Vln1 F | 11 Vln1 R | 12 Vln2 F |
-----------------------------------------------------
| 13 Vln2 R | 14 Vla F | 15 Vla R | 16 Vcs F |
-----------------------------------------------------
| 17 VCs M | 18 VCs R | 19 KB | 20 KB |
-----------------------------------------------------
| 21 KB | 22 KB | 23 FL | 24 OB |
-----------------------------------------------------
| 25 KLB | 26 FA | 27 HO R | 28 HO R |
-----------------------------------------------------
| 29 HO R | 30 HO R | 31 HO F | 32 Tr1 |
-----------------------------------------------------
| 33 Tr2 | 34 PO1 | 35 PO2 | 36 TU |
-----------------------------------------------------
| 37 Perc L | 38 Perc R | 39 TMP L | 40 TMP R |
-----------------------------------------------------
| 41 KTr | 42 BsDrum | 43 Cymbal | 44 Glock |
-----------------------------------------------------
| 45 Xylo | 46 Gong | 47 Harp L | 48 Harp R |
-----------------------------------------------------
F = Front, R = Rear; some instruments use the standard VSL naming.
Based on the table and some photos taken in the studio, I created the following image to describe the recording session (there may be some mistakes; just point them out and I will correct them, thanks!):
http://www.comicfishing.com/thbbsin/attachment/29_44_90d92dc468a11f9.jpg
Now, let's do some analysis:
2. Physical Signal Analysis:
(1) The mics for each instrument or instrument group are all directional. So we can treat the signals they record as nearly dry, i.e. without reverb, simply because of their physical nature: the mics are close, and they are directional. (I don't know whether this is really correct, so I need recording engineers to tell me, thanks!) To simulate these signals, we can simply use the original stereo sample recordings.
(2) Thoughts on "lush" strings. From the beginning I have been looking for "lush" strings. When I found an example on the Kontakt forum that turns a solo violin into a violin ensemble, I realized that the key to a "lush" or "large" string sound, let's just call it an "ensemble" effect, is layering the same material with the right control of timing. I then ran some experiments. When two copies of the same WAV are played together, they phase; but when you delay one of them slightly (about 5-20 ms), they sound "lush". So I believe that in a real recording, mics in different positions receive the sound at different times. For example, the close mic for the 1st violins picks up the sound first, and the Main L and R mics pick it up some milliseconds later. It is also true that the L mic receives the signal before the R mic does, but that difference can be taken care of by Altiverb or MIR.
As for the L/R difference, I found that Altiverb's Stage Positioning function and MIR get this right very easily, so I will use them to simulate the sound the outer mics pick up during recording. Seen this way, in a real recording the real world itself "uses" the delay method to create a lush string sound. And the result comes not only from the timing but also from the reverb: everyone knows the outer mics pick up a reverberant sound, as if rendered by direct, ER, and tail IRs. Taking all of this into account, I think this is why different recording methods produce different sounds.
In this recording, because it took place in a studio, the strings are not as "lush" as in some recordings made in concert halls: in a concert hall the mics can be placed across a much larger space, producing more "delay" on the string sound.
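The layering experiment described above can be sketched in a few lines of Python with NumPy. This is only a minimal sketch of the idea: the 12 ms delay and the decaying-noise "violin" signal are placeholder assumptions, not anything from a real session.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def lush_layer(signal, delay_ms=12.0, sr=SR):
    """Mix a signal with a slightly delayed copy of itself.

    With a sub-millisecond delay the two copies comb-filter ("phasing");
    around 5-20 ms the ear hears a thicker, ensemble-like blend instead
    of a distinct echo.
    """
    delay_samples = int(round(sr * delay_ms / 1000.0))
    delayed = np.concatenate([np.zeros(delay_samples), signal])
    padded = np.concatenate([signal, np.zeros(delay_samples)])
    return 0.5 * (padded + delayed)  # halve to avoid clipping on the sum

# Placeholder "violin" signal: one second of decaying noise.
rng = np.random.default_rng(0)
dry = rng.standard_normal(SR) * np.exp(-np.linspace(0, 5, SR))
wet = lush_layer(dry, delay_ms=12.0)
print(len(wet) - len(dry))  # the mix is delay_samples (529) longer
```

Rendering `wet` to a WAV and comparing it with `dry` makes the "lush" effect easy to hear directly.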
(3) About stereo width. Following the analysis in (1), the close mics should capture a full stereo image of the violin ensemble. Because of my lack of experience, I don't know how these recorded signals are processed in real classical mixing. Do the engineers use two mono faders to pan the instruments correctly? Or am I wrong, and the recorded signals already carry placement information and are not full width? I have tried using the S-1 Stereo Imager to pan the original dry signal correctly before mixing it in. Can this method simulate the close-mic sound?
Other problems may not have been taken into consideration yet, so these theories are open to change.
But for now, based on what we have, let's get to the simulation:
3. Physical Signal Simulation:
Let's look at the recording session again, and at what we can do with our tools:
(1) Close mics for each instrument or instrument group:
Use the original recorded VSL samples to simulate them, and use S-1 to get the correct stereo image. For mono instruments, in pursuit of the ultimate goal of realism, mix the stereo samples down to mono before mixing them in.
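For the mono downmix mentioned here, the simplest approach (my assumption; a real engineer might weight the channels differently) is to average the left and right channels, which keeps the level from clipping the way a plain sum would:

```python
import numpy as np

def stereo_to_mono(stereo):
    """Downmix a (num_samples, 2) stereo array to mono by averaging
    the left and right channels sample by sample."""
    assert stereo.ndim == 2 and stereo.shape[1] == 2
    return stereo.mean(axis=1)

# Tiny example: two stereo samples, (L + R) / 2 for each.
stereo = np.array([[0.8, -0.4],
                   [0.2,  0.2]])
mono = stereo_to_mono(stereo)
print(mono)  # [0.2 0.2]
```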
(2) Main L, R, C and Surround L, R mics:
The signals captured by these mics can be simulated with IR-based reverbs, e.g. Altiverb or the upcoming MIR. This is what an IR-based reverb gives us, but as everyone can see, it is only one part of a fine classical recording.
(3) Main L2, R2, LL, RR mics:
The signals captured by these mics can also be simulated with IR-based reverbs.
Now we can simulate the signals that would be captured in the real world. But wait, there is something else to consider.
(4) Simulating real-world delays:
Recall the analysis in section 2, (2). Although an IR-based reverb can deliver a realistic reverb sound, all IRs are processed without their initial delays, so we have to add those ourselves with a simple delay effect.
Take Altiverb as an example. Its Source Placement function is very useful for simulating the signals captured by the Main L, R, C and Surround L, R mics. If we say the close mics receive the signal at 0 ms (relative timing), then the Main L and R mics, apart from the L/R difference (which Altiverb's source placement function takes care of), should receive it after a delay corresponding to their extra distance from the players. In this session that delay is very short (about 5-10 ms), which is why the strings don't sound that "lush". Now think of a recording made in a concert hall, with three mic pairs placed at 6 m, 12 m, and 28 m. Those delays add a really nice "ensemble" effect to the strings. And don't forget: even if we strip the ER and tail from the sounds these mics capture, they are still different, because their direct sounds differ. In fact, with this recording method you get four different direct sounds of one string section. Mix them with their respective delays and you get a string section four times bigger :) (just kidding).
So now we have the methods; let's get started:
4. The Mixing Template:
When building the mixing setup, a few hints should be kept in mind:
(1) Use IRs as close as possible to the environment we are simulating. "Close" here does not mean room properties or the like; it means the recording placement and pickup pattern of the IR.
That said, when creating your own mixing setup you can use any IR you like; doing so means you are designing your own recording session. :) Keep this in mind.
(2) Delays should be physically correct to reach the ultimate target of realism. So plan your recording session in your head or on paper, then calculate the delays.
(3) Mind the signal routing. Check your mixer's routing to make sure you are using it correctly.
Here is a starting template for simulating a physically correct recording session:
http://www.comicfishing.com/thbbsin/attachment/29_44_29cfaf21838f8a0.jpg
Some comments:
1. Altiverb's source placement requires the ER and tail to be added on the master or a bus; that is why there are an Altiverb Placement mode and an Altiverb Master Insert mode.
2. In each instrument mic track group, the different Altiverb modes simulate the signals captured by the different "whole-orchestra" mics (mics that capture everything the orchestra produces, not just one instrument or group, set up in different places) in the real session.
3. Because each mode simulates a different "whole-orchestra" mic, each needs its own delay setting. Use the equation:
delay (in seconds) = distance (in meters) / 340 (Easy, isn't it? :))
to get the correct delay for each setup.
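Applied to the concert-hall example earlier (mics at 6 m, 12 m, and 28 m), the equation gives delays of roughly 17.6, 35.3, and 82.4 ms. A quick check in Python:

```python
SPEED_OF_SOUND = 340.0  # m/s, the value used in the equation above

def mic_delay_ms(distance_m, speed=SPEED_OF_SOUND):
    """Delay in milliseconds for sound to travel distance_m meters."""
    return distance_m / speed * 1000.0

# Mic distances from the concert-hall example.
for d in (6, 12, 28):
    print(f"{d:2d} m -> {mic_delay_ms(d):5.1f} ms")
```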
Now, to simulate the session, we apply the following to the template:
1. Create 3 submix buses for the 3 different placement IRs in Altiverb's master insert mode, simulating the 3 mic groups (Main, Main 2, LL & RR).
2. The input signal for each instrument or instrument group is unpanned, at unity gain, and keeps its original stereo width.
3. Create 3 pre-fader aux sends for each instrument, sending at 0.0 dB from the original track.
4. Insert an Altiverb on each aux track and set the placement of each instrument to its correct position.
5. Insert a delay effect on each aux track, placed before the Altiverb. Use the equation to calculate each instrument's delay and set it.
6. Route each aux bus's output to its corresponding submix bus so that the correct ER and tail are rendered.
7. Insert an S-1 Imager on each audio track and set its panning (or use two faders to get the correct panning?).
8. Adjust the volumes and create the final mix.
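The steps above can be sketched as a small routing table. This is only a paper model of the session: the instrument names and the mic-group distances are hypothetical placeholders, but it makes the per-send delay bookkeeping from steps 3-6 explicit.

```python
SPEED_OF_SOUND = 340.0  # m/s

# Three "whole-orchestra" mic groups with hypothetical stage distances.
MIC_GROUPS = {"Main": 6.0, "Main2": 12.0, "LL_RR": 28.0}

def build_sends(instruments):
    """For each instrument, create one pre-fader send per mic group,
    carrying the delay (ms) that sits before Altiverb (step 5) and the
    submix bus it feeds (step 6)."""
    sends = {}
    for name in instruments:
        sends[name] = {
            group: {
                "level_db": 0.0,  # step 3: send at 0.0 dB
                "delay_ms": dist / SPEED_OF_SOUND * 1000.0,
                "insert_chain": ["delay", "altiverb_placement"],  # step 5 order
                "output_bus": group,  # step 6: route to submix bus
            }
            for group, dist in MIC_GROUPS.items()
        }
    return sends

routing = build_sends(["Vln1", "Vln2", "Vla"])
print(routing["Vln1"]["Main2"]["delay_ms"])  # ~35.3 ms
```

Swapping in your own distances regenerates every delay consistently, which is the point of planning the session on paper first.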
This project is just an attempt to simulate a real classical recording session, and to offer a physically correct way of doing so: connecting the virtual and the real through physical facts and a few tricks. It really needs experienced recording engineers to make it more precise and more "real". So please help me improve this work.
Thanks!
And sorry for my poor English :)
YWT