Hi Sam,
ad 1.: The good news first: The mutual interaction between individual Icons happens on a purely acoustical level. Certain frequency ranges are typically amplified, while others cancel out or widen the perceived image, depending on the phase and delay relationships of the surrounding IRs. In other words: As long as you use identical settings for different instances of MIR in two independent VI Frames, the result will be the same as the sound you would get from a single instance.
ad 2.: This one is trickier. While I consciously adhered to conventional audio engineering nomenclature in the GUI, things behave very differently under the hood, technically speaking. One would expect individual audio streams to be rendered in real-time for every capsule, which we could "solo" instantly by pressing a button. Things are much more demanding in MIR, though: Each virtual "capsule" is the result of 32 individual IRs*, which means that you could run into scenarios where the engine would have to calculate up to 512** convolutions (!) in real-time for a single (!!) instrument in a full-blown 7.1 surround setup (... 128*** in the case of an ordinary stereo setup with Main and Secondary Microphone).
... it goes without saying that this would easily overburden even the most advanced CPUs, even with very small arrangements. The MIR engine introduces an extremely clever (but rarely talked about) technical solution to overcome this hurdle: "Positional pre-rendering" (... you can read up on more details and even more numbers here -> https://www.vsl.info/en/manuals/mir-pro/think-mir#positional-ir ). In short: MIR pre-calculates an instrument-specific set of impulse responses the moment you put an Icon on a certain position on a stage, taking into account all the individual parameters like the chosen Output Format, the instrument's Directivity Profiles, its Width, Rotation and so on. This set is really small by comparison (e.g. only 4 IRs for a stereo instance). This way, MIR Pro is able to run a hundred and more Icons in (close to) real-time without busting your CPU. ;-)
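If it helps to picture the idea, here is a minimal sketch of what positional pre-rendering boils down to conceptually. This is hypothetical code, not MIR's actual implementation; the function name and the simple per-direction weighting are my assumptions. The point is only this: the many per-direction IRs are mixed down once at placement time, so the real-time engine has to convolve just a handful of pre-rendered IRs per instrument.

```python
# Conceptual sketch of "positional pre-rendering" (hypothetical, not MIR code).
# The per-direction IRs for one capsule are collapsed into a single IR once,
# when the Icon is placed; real-time playback then needs only one convolution
# per pre-rendered IR instead of one per direction.

def prerender_capsule_ir(direction_irs, direction_weights):
    """Collapse many per-direction IRs into one IR for a single capsule.

    direction_irs:     list of IRs (each a list of samples), e.g. 32 per capsule
    direction_weights: gains derived from the instrument's Directivity Profile,
                       Width, Rotation etc. (assumed here for illustration)
    """
    length = max(len(ir) for ir in direction_irs)
    combined = [0.0] * length
    for ir, weight in zip(direction_irs, direction_weights):
        for i, sample in enumerate(ir):
            combined[i] += weight * sample
    return combined

# Toy numbers: a stereo setup with 4 capsules in total,
# each capsule built from 32 per-direction IRs.
toy_irs = [[1.0, 0.5, 0.25] for _ in range(32)]
toy_weights = [1.0 / 32] * 32
capsule_irs = [prerender_capsule_ir(toy_irs, toy_weights) for _ in range(4)]

# Real-time cost: 4 convolutions (one per pre-rendered IR)
# instead of 4 * 32 = 128 per-direction convolutions.
print(len(capsule_irs))  # 4
```

Of course the real engine does far more than a weighted sum, but the trade-off is the same: heavy work once per placement, cheap work per audio block.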
This also means that there is no easy way to "solo" a capsule on the fly, as all the pre-rendering calculations would have to be re-done. But I'm fully aware of the problem - which is why I implemented a workaround for those cases where we absolutely need to listen to the Secondary Mic in solo: At the bottom of the list of my Factory Presets for the Main Mic you will find three "MUTE" entries. Choosing one of those will of course trigger the re-rendering of all the Icons' impulse responses, but once that is done you can in fact listen to the Secondary Microphone's output. ... just don't forget to save the settings you've made for the Main Mic before! ;-)
_____________________________
*) 8 directions from each position, recorded in 4-channel Ambisonics = 32
**) 32 IRs per "capsule", 8 capsules Main Mic, 8 capsules Secondary Mic = 512
***) 32 IRs per "capsule", 2 capsules Main Mic, 2 capsules Secondary Mic = 128
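For what it's worth, the counts in the footnotes can be double-checked with a few lines of Python (just the arithmetic from above, nothing MIR-specific):

```python
# Convolution-count arithmetic from the footnotes:
# each virtual capsule is built from 8 source directions,
# each recorded in 4-channel Ambisonics.
IRS_PER_CAPSULE = 8 * 4  # = 32

def convolutions(main_capsules, secondary_capsules):
    """Real-time convolutions per instrument without positional pre-rendering."""
    return IRS_PER_CAPSULE * (main_capsules + secondary_capsules)

# 7.1 surround: 8 capsules each for Main and Secondary Mic
print(convolutions(8, 8))  # 512

# Ordinary stereo: 2 capsules each for Main and Secondary Mic
print(convolutions(2, 2))  # 128
```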