Honestly, I was really only thinking about output into stereo speakers. [...]
LOL [<:o)] ! Seems as I was using a sledgehammer to crack a nut. Sorry for that.
Kind regards,
/Dietz - Vienna Symphonic Library
I applaud the implementation of multi-channel surround options within MIR. I would like to see, however, a single industry standard for the speaker type and positioning for mixing and playback of multi-channel content. For a seamless 360-degree, holosonic, image-specific soundfield, 5 to 7 identical (with an optional height channel), front-radiating (non-dipolar), full-range speakers at ear level are the optimal approach. But if speaker positioning is one way when the music is mixed, and another upon playback, the multi-channel mix will be a distortion of how it was initially conceived. That's why I push for a standard for the playback of such recordings, whether it be for film or music only. Does Vienna have a speaker-positioning standard for the mixing of 5 or 7 discrete channels? And where do you position the side/side-rear channels on the circular axis for your mixes? This may all be a little premature for the current offering of MIR, but I think it's good to open up the dialog on this.
Also, will there be more "hotspot" points within the various rooms to accommodate image-specific surround envelopment options? I know this will take the music away from the "spectator sport" perspective, but I feel this could open the floodgates to new and interesting sonic soundscapes.
(Note to myself: Maybe we should split this thread ...)
Right now, the surround mixes I do are done in a standard, quite precise ITU 5.1 speaker setup. I once tried a 6.1 setup (with a rear center speaker), just for testing. A "classical" Quadraphonic setup also worked out nicely.
But during an earlier phase of the Vienna MIR development we had a loose collaboration with Iosono (a spin-off company of the Fraunhofer Institute in Germany). In their showroom studio in Ilmenau I had the chance to do a proof-of-concept mix on a full-blown Wave Field Synthesis system. Now _that's_ the kind of surround we were dreaming of as youngsters! 8-) Simply breathtaking.
-> [URL]http://www.iosono-sound.com/technology/[/URL]
Looking at the screenshots of their control software, you will see that the two technologies fit like a glove:
-> [URL]http://www.iosono-sound.com/technology/hardware-and-software/[/URL]
What a pity that we won't see this approach in the average living room in the near future. :-/
I applaud the implementation of multi-channel surround options within MIR. I would like to see, however, a single industry standard for the speaker type and positioning for mixing and playback of multi-channel content. For a seamless 360-degree, holosonic, image-specific soundfield, 5 to 7 identical (with an optional height channel), front-radiating (non-dipolar), full-range speakers at ear level are the optimal approach. But if speaker positioning is one way when the music is mixed, and another upon playback, the multi-channel mix will be a distortion of how it was initially conceived.
Your suggestion is an excellent one for 7.1 systems. 5.1 already has ITU-R Recommendation BS.775-2 (07/06), which specifies speakers at 0°, ±30°, and ±110°, although it is unclear how many people actually have their speakers laid out this way. (Four of the speakers in a square seems to be more popular.) Ambisonics, however, is fundamentally different from 5.1 and 7.1, and does not need a standard speaker layout. This is one of its great advantages.
What is encoded in Ambisonics is not speaker feeds, but direction. When mixing in Ambisonics, the positions of the speakers are unknown and of no interest. Further, when Ambisonics is decoded to speaker feeds, all of the speakers cooperate to localise a sound in its correct position, so, for example, when the speakers on the left push, those on the right pull. The speakers all contribute to the creation of a single coherent soundfield.
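For what it's worth, the idea that Ambisonics stores direction rather than speaker feeds can be sketched in a few lines of Python. This is a toy horizontal first-order (B-format) encoder and decoder, nothing from MIR or any particular library, and the conventions assumed here (a -3 dB W channel and a simple "basic" decode) vary between real implementations:

```python
import math

def encode_b_format(azimuth_deg, signal):
    """Encode a mono signal into horizontal first-order B-format (W, X, Y).
    Only the source *direction* is stored -- no speaker positions yet."""
    az = math.radians(azimuth_deg)
    w = [s / math.sqrt(2) for s in signal]  # omnidirectional component, -3 dB by convention
    x = [s * math.cos(az) for s in signal]  # front/back figure-of-eight
    y = [s * math.sin(az) for s in signal]  # left/right figure-of-eight
    return w, x, y

def decode_to_speakers(w, x, y, speaker_azimuths_deg):
    """Basic first-order decode: every speaker receives a weighted sum of
    W, X and Y, so all speakers cooperate -- a speaker opposite the source
    gets an anti-phase ("pulling") feed."""
    n = len(speaker_azimuths_deg)
    feeds = []
    for sp_az in speaker_azimuths_deg:
        a = math.radians(sp_az)
        feeds.append([(math.sqrt(2) * wi + 2 * (xi * math.cos(a) + yi * math.sin(a))) / n
                      for wi, xi, yi in zip(w, x, y)])
    return feeds
```

Encoding a source at 90° (hard left) and decoding to a square of speakers at 0°, 90°, 180° and 270° gives the 90° speaker the largest positive feed while the 270° speaker gets a small negative, anti-phase feed: the push/pull behaviour described above, with no speaker layout involved until the decode step.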
For more information on Ambisonics, please see [url=http://www.wikipedia.org/wiki/Ambisonics]the Wikipedia page[/url].
I agree that there are new ways to handle samples, but I'm sure VSL will have some novelties in that field. They have all the audio they need. Maybe they will find new ways to chop it up, and manage it with their sampler.
As for the mic distance, what reverb are you using? And are you sending pre- or post-fader? IMO you need at least Altiverb to work well with VSL, and all the better if you have MIR. With the features in those reverbs, you should be able to control the distance well.
I have a feeling (I might be wrong) that close/distant miking options with samples are a feature of the past, since the advent of MIR. [Y]
Most orchestral recordings are actually a mixture of a main microphone system and spot microphones for added definition. This is exactly what you would achieve by using the room signal created by the MIR engine (i.e. the main system) with the direct signals mixed in. Actually, this concept was part of the Vienna Symphonic Library's samples from the very beginning.
Apart from that, Vienna MIR offers a dedicated "distant" Character Preset for most Vienna Instruments to give you even more options on the direct signal itself.
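As an illustration only (the helper name and parameters are hypothetical, not MIR's internals), the main-system-plus-spot-mic balance described above amounts to summing a room signal with a gain-scaled, time-aligned direct signal:

```python
def mix_main_and_spot(room, direct, direct_gain=0.5, direct_delay_samples=0):
    """Mimic the classic main-system + spot-mic balance: the room/main
    signal carries the space, the direct/spot signal adds definition.
    Delaying the direct signal keeps it time-aligned behind the main
    pickup, as engineers do when mixing spot mics into a main array."""
    n = max(len(room), len(direct) + direct_delay_samples)
    out = [0.0] * n
    for i, s in enumerate(room):
        out[i] += s
    for i, s in enumerate(direct):
        out[i + direct_delay_samples] += direct_gain * s
    return out
```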
HTH,
How close are the Vienna instruments typically mic'd?
The issue of making a drier recording more wet isn't nearly as big a hurdle as making something mic'd close sound like it's mic'd from a distance.
This depends. Michael Hula, our musical director, would have all the details (which he wouldn't share anyway ;-) ...), but I would say between 1 and 2 m for certain solo instruments, up to 5 m and more in the case of ensembles (which isn't even close to "close", in my book).
This is exactly what you would achieve by using the room signal created by the MIR engine (i.e. the main system) with the direct signals mixed in.
And in that case classical recordings would be made that way - why bother setting up the extra microphones then?
5m seems like it could be reasonable for a group, but one meter for a solo instrument seems way too close for an orchestral sound.
@Roger Noren said:
And in that case classical recordings would be made that way - why bother setting up the extra microphones then?
I don't understand this question, sorry.
What you wrote seems like a contradiction to me. First you say (which I agree with) that classical recordings are made by balancing close and distant miking, and then you say a convolution reverb can control the depth placement just as well. Then I wonder why classical recordings are not made that way, since it seems much easier. My point is that close and distant miking give different characteristics, which could not be entirely simulated by the CR. In my opinion, the best way to simulate something would be to do it as close to the real thing as possible. I understand that users sometimes don't want any reverb, so the close and distant miking should be done in a fairly dry room. Then, by balancing these two and adding the reverb wanted, it should give a convincing result. The demos I've heard from VSL are very impressive in every aspect except for the depth definition.
@mike connelly said:
5m seems like it could be reasonable for a group, but one meter for a solo instrument seems way too close for an orchestral sound.
When you are recording real instruments in an orchestral setting it would be unlikely you'd go further away than 1 m with the "spot" mic for your solo instrument. Apart from this position sounding just fine (along with mixed-in pickup of the room and the other orchestra mics), any further away and the mic would tend to pick up additional instruments in preference to your solo.
Julian
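Julian's bleed argument can be put into numbers with a toy calculation (my own illustration, with assumed geometry): under a simple inverse-distance (1/r) law and equal source levels, the soloist's level advantage at the spot mic over a neighbouring player sitting a couple of metres to the side shrinks quickly as the mic backs off.

```python
import math

def separation_db(mic_dist_m, lateral_offset_m=2.0):
    """Level advantage (dB) of the soloist over a neighbouring instrument
    sitting lateral_offset_m to the side of them, assuming equal source
    levels and inverse-distance (1/r) attenuation of the direct sound."""
    neighbour_dist = math.sqrt(mic_dist_m ** 2 + lateral_offset_m ** 2)
    return 20 * math.log10(neighbour_dist / mic_dist_m)
```

With the mic 1 m from the soloist and a neighbour 2 m to the side, the soloist sits roughly 7 dB above the bleed; pull the mic back to 3 m and the separation collapses to under 2 dB, which is exactly why spot mics stay close.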
@julian said:
When you are recording real instruments in an orchestral setting it would be unlikely you'd go further away than 1 m with the "spot" mic for your solo instrument. Apart from this position sounding just fine (along with mixed-in pickup of the room and the other orchestra mics), any further away and the mic would tend to pick up additional instruments in preference to your solo.
Julian
I'd agree that 1m may be fine for a spot mic, but in that situation it is also blended with a more distant mic. In general, orchestral recordings sound most natural using mostly the main mics that capture the entire ensemble, and a small amount of spot mics (when they are needed).
I'm talking about recording with only one mic and having it at 1m, which seems to be the case with some VSL solo instruments (correct me if I misunderstood).
If you really wanted to recreate that means of recording a full orchestra, it seems like the best way to do it would be to use multiple mics (more distant mics and closer "spot" mics) and let the user mix between the two. The difference in sound between close and distant miking is more than just reverb, and I doubt it can really be simulated well.
So, does MIR actually help improve the timbre of the currently available VSL strings libraries (i.e. offer a rich, natural, and warm string timbre)? I personally don't think so.
Yes, MIR can improve the perceived spatial elements around the samples, but since the samples themselves do NOT have that rich, warm timbre I'm expecting to hear, I very much doubt any type of spatial treatment will solve this problem. Sadly, this is very much the case with all of the VSL strings audio demos I hear. They lack that natural, warm, and pleasing string timbre that my ears have a big craving for.
If VSL thinks that MIR is the ultimate solution to improve the current VSL strings, I would love to hear a few demos that prove that this is actually possible. So far I'm NOT convinced.
@Roger Noren said:
And in that case classical recordings would be made that way - why bother setting up the extra microphones then?
I don't understand this question, sorry.
What you wrote seems like a contradiction to me. First you say (which I agree with) that classical recordings are made by balancing close and distant miking, and then you say a convolution reverb can control the depth placement just as well. Then I wonder why classical recordings are not made that way, since it seems much easier. My point is that close and distant miking give different characteristics, which could not be entirely simulated by the CR. In my opinion, the best way to simulate something would be to do it as close to the real thing as possible. I understand that users sometimes don't want any reverb, so the close and distant miking should be done in a fairly dry room. Then, by balancing these two and adding the reverb wanted, it should give a convincing result. The demos I've heard from VSL are very impressive in every aspect except for the depth definition.
You didn't read my previous post properly (or it's a language thing - English isn't my mother tongue, as you may have guessed 😊 ...)
What the main microphone in an orchestral recording picks up is 90 percent or more room reflections. These reflections come from sources which are "dry" by definition (as an instrument isn't a room in our sense of the word - well, maybe with the exception of an organ).
This room signal is what MIR is all about. Mix in the dry signal to the "proper" amount - which is always more a question of aesthetics than pure science, in my experience - and you are right there. The feeling of "distance" is built in.
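The wet/dry balance described here can be sketched in a few lines of Python, with a naive direct-form convolution standing in for the room signal a convolution engine like MIR would produce (purely an illustration, with made-up helper names; not how MIR itself works internally):

```python
def convolve(dry, ir):
    """Plain direct-form convolution of a dry signal with a room
    impulse response -- the textbook way to add a room's reflections."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += d * h
    return out

def place_at_distance(dry, ir, wet_ratio):
    """Blend the dry signal with its convolved (room) version; a higher
    wet ratio reads as "further away", echoing a main-mic pickup that is
    mostly reflections."""
    wet = convolve(dry, ir)
    out = [0.0] * len(wet)
    for i, d in enumerate(dry):
        out[i] += (1 - wet_ratio) * d
    for i, w in enumerate(wet):
        out[i] += wet_ratio * w
    return out
```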
Of course we are in the virtual world, so we have to deal with side-effects that wouldn't occur in reality; all we can do is fool the human ear as skillfully as possible. 😊 MIR is still brand-new, so we have yet to gain mastery.
Kind regards,