It all depends on the style of music you are writing, your real-time playing skills, the length of the work itself and the amount of time you have. But I find it ironic that, on one hand, technology is about to provide us with the largest sample library ever conceived, and yet some people are stating that the same technology will never be able to provide the means to use it realistically. Very little research and effort has been put into this particular area to date, but I do not agree that computers will never allow extremely convincing interpretations once that work is done. I believe that Sibelius, due to its intuitive, almost tactile working environment, represents the best hope for the eventual integration of a truly professional notation program and playback device.
It's not as gloomy as you make it seem, Gungnir. The problem is that an application designed primarily to be a scoring application is not going to be the best hope for musical interpretation, because scores, by their very nature, omit an ENORMOUS amount of performance information that we assume performers will supply. It would be like a word processor with a text-to-speech function reading poetry.
Now, you could conceivably build many hidden layers on top of the notation level of something like Sibelius that could contain all the data needed to massage what's down in black and white into an actual realization of a piece of music. I have a feeling, however, that if Sibelius is ever bloated up to actually do this, it is going to be a very cumbersome, unintuitive piece of software. Regardless, non-real-time tinkering with these aspects is always going to be more painful than what can be accomplished in real time by a performer (even if every line of a large orchestration needs to be played individually). A musical synergy is built up in a real-time performance process that is going to be extremely difficult to accomplish in a non-real-time way.

Personally, I start out with a reduced guide track on a default instrument to establish tempo and basic feel. Then I use that guide in a fixed-head graphic editing display window to visually "conduct" the performance (along with the audio). It's amazing how well and how fast this works. At some point I'll lose the guide audio, and I might go back and replace some of the early tracks once more of the orchestration is built up. In this sense, I'm rhythmically and dynamically playing with the orchestra, reacting and relating to all the other "players," hopefully in a stylistically correct manner. An enormous amount of intuition and on-the-fly editing comes into play. I can't begin to imagine how this could be accomplished in a non-real-time environment, even with a great sample library. To begin with, how would you accurately calibrate the dynamics of each solo and section instrument? That's something that's much better done by ear.
Maybe someday we'll have an application that can both score and musically realize the score. Personally, I feel Logic is much closer to that ideal right now. At this point in time, however, achieving *excellence* at either task requires the use of specialized tools. You might as well ask why humans, after thousands of years of civilization and tool making, still don't have a single eating implement that simultaneously fulfills the functions of knife, fork and spoon. I am open to someone or some company coming up with a solution that elegantly integrates scoring and performance, but it hasn't happened yet. At its present level of development, I would no more want to use Sibelius or Finale to perform music than I'd want to eat soup with a fork. ;)
Lee Blaske