I don't think so. Macker wrote: "I'm using only the Analytical Dry simulated room, Focus at nominal and headphone 'correction' switched off."
/Dietz - Vienna Symphonic Library
Dietz, thanks for your reply.
(I think we may have some nomenclature issues here - probably my fault.
I use the engineering jargon term "black box" to mean any piece of hardware or software for which the transfer function - or at least the 'functional processing' - between its inputs and outputs is not known or well defined. Also, I appreciate that the term "decoding" tends to mean something quite specific to many people wherever multi-speaker setups are involved, so I should instead use the term "processing" to avoid confusion with that popular usage. What I was really asking is whether or not there is any sort of intermediate Ambisonics channel-processing prior to 'mapping' [i.e. 'decoding'] Ambisonics channels to loudspeaker channels.)
You're saying that for this particular virtual mic there might perhaps be "an uncontrolled mixture of all [7 Ambisonic] channels"? In other words, this mic is not subject to the same many-to-2 channel processing (or 'decoding') that appears to be obligatory for all other "stereo" virtual mics in MIR? Well, you do call this mic "unique" in your little screed about it in MIR. (7 channels is, I believe, the channel count for horizontal-only 3rd-order Ambisonics virtual mics, whereas there are 16 Ambisonics channels for full-sphere 3rd-order virtual mics, both regardless of the selected loudspeaker configuration.) Your remark is indeed puzzling to me.
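(For anyone reading along: those channel counts fall straight out of the Ambisonics order - full-sphere needs (N+1)^2 channels, horizontal-only needs 2N+1. A quick Python sketch of the arithmetic, purely my own illustration, nothing from MIR itself:)

```python
def ambisonic_channels(order: int, horizontal_only: bool = False) -> int:
    """Ambisonic channel count for a given order N:
    full-sphere (3D) uses (N + 1)**2 channels,
    horizontal-only (2D) uses 2*N + 1 channels."""
    return 2 * order + 1 if horizontal_only else (order + 1) ** 2

print(ambisonic_channels(3, horizontal_only=True))  # 7  (horizontal-only, 3rd order)
print(ambisonic_channels(3))                        # 16 (full-sphere, 3rd order)
```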
I guess you've answered my question about whether or not it's first obligatory to 'map' explicitly to a more-than-2 loudspeaker configuration in order to get a full horizontal or spherical experience on headphones, by telling me to open MIR's Output Format editor - which appears to be for loudspeakers only.
Looks like I'll have to ask VSL Support to see if they can get a dev to spill some beans on this issue that's causing my puzzlement and deterring me from experimenting sensibly with my own speculative capsule configurations in MIR. As for the dearVR Monitor echo issue, I might ask their support crew - or I might not, given that I really don't want (nor, I hope, need) a simulation of loudspeakers in a simulated room, getting in the way of my headphone mixes.
Anyway, I appreciate your taking the time to reply.
Macker wrote: "As for the dearVR Monitor echo issue, I might ask their support crew - or I might not, given that I really don't want (nor, I hope, need) a simulation of loudspeakers in a simulated room, getting in the way of my headphone mixes."
Binauralization is not about loudspeaker simulation; it's first and foremost about the so-called head-related transfer function (HRTF, i.e. the very specific way our head determines spatial perception). Everything else - room and/or speaker simulation, head tracking, headphone profiles and so on - is just an add-on. Let us put it straight and simple: if you want to make proper use of 3D audio, you need either 3D monitoring or binauralization for headphones.
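For the occasional reader: at its core a binauralizer convolves each source signal with a pair of head-related impulse responses (the time-domain form of the HRTF) and sums the results into two ear signals. A rough Python sketch of that idea - purely illustrative, not dearVR's (or anyone's) actual implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(sources, hrirs):
    """Render mono sources at fixed directions to binaural stereo.

    sources : list of 1-D arrays, one mono signal per direction
    hrirs   : list of (left_ir, right_ir) pairs, the measured
              head-related impulse responses for those directions
    Returns an array of shape (num_samples, 2).
    """
    length = max(len(s) + max(len(h[0]), len(h[1])) - 1
                 for s, h in zip(sources, hrirs))
    out = np.zeros((length, 2))
    for sig, (ir_l, ir_r) in zip(sources, hrirs):
        out[: len(sig) + len(ir_l) - 1, 0] += fftconvolve(sig, ir_l)
        out[: len(sig) + len(ir_r) - 1, 1] += fftconvolve(sig, ir_r)
    return out
```

Room simulation, head tracking and headphone profiles then refine this basic chain.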
Ok. Thanks to your detailed response I realise I must now do more experimenting in MIR, and read some more papers on Ambisonics (Graz paradigm mainly, but I've also picked up some interesting papers from Helsinki). No point in rushing in like a fool, lol.
I've been mixing binaurally for headphones since I bought the Logic Pro 9 Studio box set a long time ago; I'm now pretty familiar with that and its underlying concepts. However, I've also picked up a recent-ish paper on processing Ambisonic signals binaurally. Ahh what endless fun, lol.
The "black box" thing is relative. For example, what you treat as the clearly understood functionality of some particular subsystem or component, I might have to treat as a black box unless and until I can develop my own reasonably well-grounded understanding of it. Anyway, you've cleared up one part of my curiosity on the black box front: I had wondered if the Graz crew's contribution to MIR includes some UMPA-IEM 'confidential ingredients'; but you've said that all is open and clear from your standpoint. All good.
I must say I find your English not only fluent but of a high standard even among native English speakers. No worries, I'll always endeavour to bear with you linguistically - and try to avoid letting myself fall into becoming a typical crusty and lazy old imperialist Brit, lolol.
Later.
The only "secret sauce" is the upscaling process the genius people at IEM Graz developed to raise our 1st order RoomPacks to 3rd Order (... they suggest 7th order, actually, but then it would be like back in the days when a Mac could process two or three instances of AltiVerb in real-time :-D ...). I know in principle what they did, and I know a little about the sonic possibilities as well as the potential pitfalls of their procedures, having helped them evaluate different approaches acoustically, but the math involved is _far_ beyond my comprehension. 8-/
.... an important sidenote to the occasional reader of our conversation: While some basic knowledge of Ambisonics can be quite helpful in better understanding MIR 3D, it is by no means a prerequisite. :-)
I have an Output Format question. The manual talks about the mic array vs. the loudspeaker sections, and I am wondering what happens when the faders are up on both sections... does that mean that the mic array results are summed together with the coefficient-fader results (if and when coefficients exist)?
Also, another question, on the mic array section there is a slider called "Wet Volume Offset". What does that do exactly as an "offset"?
Dietz wrote: "If you want to make proper use of 3D audio, you need either 3D monitoring or binauralization for headphones."
Rubens, FYI, I did not get very good results with DP11 and surround audio. It cannot support 3D audio at all, only 2D... and I managed to crash it several times while attempting it. I cannot recommend DP at this time for surround work of any kind - and mind you, that is my preferred DAW. I have more or less decided to just do my MIDI work in DP, and once that is done, bounce the tracks, take them over to another DAW with better surround support such as Logic Pro, Cubase or even Pro Tools, and mix from there. Make sure to send feature requests to MOTU to add Atmos support like everyone else. They are way behind the curve on that one.
That being said... the way to use dearVR Monitor in general is to put it on the master bus of a surround project. In DP that means you create a surround bundle as 5.1, use that bundle as the output of the project, and have all the instrument channels send to that bundle. Put MIR Pro 3D onto each instrument channel after you have established that they are surround channels rather than stereo, make sure you're using the 5.1 version of the MIR Pro 3D plug-in each time, and select one of the 5.1 Output Formats in MIR Pro 3D. On the master channel, put dearVR Monitor, which will convert the six channels of 5.1 audio into two channels of binaural. That's it. Use it for monitoring through headphones, not so much for bouncing.
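Conceptually, that last step treats each of the six speaker feeds as a virtual loudspeaker at its nominal position and folds them down binaurally. A rough Python sketch of the idea - my own illustration with assumed channel names and an assumed HRIR lookup, not dearVR Monitor's actual internals:

```python
import numpy as np
from scipy.signal import fftconvolve

# Nominal 5.1 azimuths in degrees (ITU-R BS.775): L, R, C, Ls, Rs.
AZIMUTHS = {"L": -30.0, "R": 30.0, "C": 0.0, "Ls": -110.0, "Rs": 110.0}

def fold_down_5_1(feeds, hrir_lookup, num_samples):
    """Fold 5.1 speaker feeds to binaural stereo, one virtual
    loudspeaker per channel. 'feeds' maps channel name -> 1-D array;
    'hrir_lookup' maps an azimuth to a (left_ir, right_ir) pair.
    The LFE feed is simply added to both ears here; a real monitoring
    plug-in treats it more carefully."""
    out = np.zeros((num_samples, 2))
    for name, azimuth in AZIMUTHS.items():
        ir_l, ir_r = hrir_lookup(azimuth)
        left = fftconvolve(feeds[name], ir_l)[:num_samples]
        right = fftconvolve(feeds[name], ir_r)[:num_samples]
        out[: len(left), 0] += left
        out[: len(right), 1] += right
    if "LFE" in feeds:
        n = min(len(feeds["LFE"]), num_samples)
        out[:n, 0] += feeds["LFE"][:n]
        out[:n, 1] += feeds["LFE"][:n]
    return out
```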
Quote: "On the mic array section there is a slider called 'Wet Volume Offset'. What does that do exactly as an 'offset'?"
Dry and Wet Volume Offset allow for different gain settings after the decoding, relative to the capsule's Volume set by the top-most fader (0 means "no offset - unity gain"). I used this a lot in 1st Order surround setups, for example to create some bias towards the front speakers for the dry signal.
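In gain terms, the offset is simply added to the capsule's Volume fader (in dB) before conversion to a linear factor. A small Python sketch of that arithmetic - the values are made up for illustration, not MIR presets:

```python
def capsule_gain(volume_db: float, offset_db: float = 0.0) -> float:
    """Linear gain for a capsule path: the Dry/Wet Volume Offset adds
    to the capsule's Volume fader, so a 0 dB offset means unity gain
    relative to that fader."""
    return 10.0 ** ((volume_db + offset_db) / 20.0)

# Bias the dry signal towards the front (illustrative values):
front_dry = capsule_gain(0.0, +3.0)  # ~1.41x
rear_dry = capsule_gain(0.0, -3.0)   # ~0.71x
```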
Dietz wrote: "If you want to make proper use of 3D audio, you need either 3D monitoring or binauralization for headphones."
Like Dewdman42, I was under the impression that Digital Performer isn't 3D audio compatible. But in any case: a binauralization device like dearVR Monitor should be used for monitoring (as the name implies). In Cubase/Nuendo there's a dedicated Control Room section for this task, and I created a setup that mimics this concept in Pro Tools, too. The routing principle is always the same: 3D (or surround, or stereo) input -> binauralizer with the fitting input settings -> stereo monitoring output.
.... if you want to bounce this binauralized signal as a file, you would use the same approach in your master channel.
They are simply summed, yes. After all, they are just different methods to derive output from the same source. Several factory presets use this approach to strengthen the perceived position of the dry signals in an otherwise predominantly coefficient-based wet signal environment.
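(Put differently for the occasional reader: per output channel, the two paths are just added, with no crossfade or priority logic. A one-liner, purely as illustration:)

```python
import numpy as np

def output_channel(mic_array_out: np.ndarray, coefficient_out: np.ndarray) -> np.ndarray:
    """Both derivation paths feed the same output channel, so the
    result is a plain sum (arrays assumed to be equal length)."""
    return mic_array_out + coefficient_out
```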
Dietz wrote: "Dry and Wet Volume Offset allow for different gain settings after the decoding, relative to the capsule's Volume set by the top-most fader (0 means 'no offset - unity gain')."
Thanks for that explanation! Makes sense.