Positioning rather dry samples on a virtual stage can be done with early reflections and EQ.
I have started researching a concept for a VST plugin that handles this as an insert on individual input channels, each receiving a single instrument (e.g. tuba) or an instrument group (French horns, V1, etc.).
It will take into account:
- position on the stage (using stereo panning of the full-width input to match the rest of the processing)
- distance to the listener (virtual mics)
- the typical radiation pattern of specific instruments (think French horns radiating backward, tuba upward, brass forward, etc.)
- the size of the stage, the warmth of the venue, and its height and depth
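To give an idea of the first parameter: a minimal sketch of placing a source on the stage with a constant-power (sin/cos) pan law, so perceived loudness stays roughly even as the source moves. This is just an illustration in Python, not the plugin's actual panning code, and `constant_power_pan` is a hypothetical name:

```python
import math

def constant_power_pan(sample, pan):
    """Place a mono sample on the stereo stage.

    pan: -1.0 (hard left) .. 0.0 (centre) .. +1.0 (hard right).
    Constant-power law: left^2 + right^2 stays equal to sample^2,
    so the source keeps the same perceived level at any position.
    """
    # Map pan from [-1, 1] to an angle in [0, pi/2]
    theta = (pan + 1.0) * math.pi / 4.0
    return sample * math.cos(theta), sample * math.sin(theta)
```

In the plugin, a gain pair like this would be applied per channel before the early-reflection processing, so the panned position and the reflections agree.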
It will be (if ever released! ;-) ) based on several impulse responses per instrument that are combined at runtime. The IRs are not derived from real stages but generated from 3D models (of early reflections only) that take the aforementioned parameters into account. The IRs cannot be edited or loaded by the user; they are built into the plugin as presets.
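The "combined at runtime" idea can be sketched as a weighted blend of the built-in IRs followed by a convolution with the input. The weighting scheme and function names here are my own illustration of the concept, not the plugin's actual design, and a real implementation would use FFT or partitioned convolution rather than this naive loop:

```python
def mix_irs(irs, weights):
    """Blend several early-reflection IRs into one composite IR.

    irs: list of impulse responses (lists of float samples), e.g. one
    per radiation direction or stage dimension.
    weights: per-IR blend factors derived from the stage parameters.
    """
    length = max(len(ir) for ir in irs)
    out = [0.0] * length
    for ir, w in zip(irs, weights):
        for i, s in enumerate(ir):
            out[i] += w * s
    return out

def convolve(signal, ir):
    """Naive direct convolution of the dry signal with the composite IR."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, x in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out
```

Because the blend happens before the convolution, moving an instrument or resizing the stage only means recomputing one short composite IR, not running several convolutions in parallel.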
This project is currently in a proof-of-concept phase, so it is not yet even vapourware :D