I wanted to say this is an important project in general, bringing this fine composer to light. I am amazed that this great piece is not already in the orchestral repertoire. Also, this is one of the things that is so good about the VSL Forum - you actually encounter things here that you simply don't find elsewhere.
I cannot devote too much time to this, but I am listening when I can. Also, since it is all in separate MIDI tracks, that is great, because any changes can be made very practically. There are some things that can be done in general, divorced from any specific timings, and I can give those right now.
One thing I noticed is that the overall mix is very dry, which increases the artificial effect. As was pointed out, using MIR, if you can do so, would be a great thing, since it allows you to do fantastic-sounding mixes without a huge amount of tweaking. MIR tends to make almost any sound very realistic, with no effort, and so all the effort spent on the huge amount of music in this could benefit from that.
Also, I feel strongly that too few articulations are used in general. This is of course the most basic aspect of doing a MIDI performance - selecting articulations. It is how one creates the basic sense of realism as well as expressiveness. And so, it is up to the performer to decide exactly what he likes. This involves a HUGE amount of work, because of the vast number of possibilities that VSL has created. You could take one line of music and do it literally a hundred different ways with all the articulations that are available.
So that is why I originally said it is very important to take this piece and listen to EACH TRACK SOLO. And then ask oneself: is that one starting note just a sustain, or is it a sforzando - and even more, which of the particular sforzandi that have been recorded for this instrument? All of them vary, based upon performance practice, how they were recorded, the player who recorded them, etc. So this is where AUDITIONING the available samples is so crucial. What I am saying here is that there is no magic instant solution for this aspect - it is simply hard work figuring out exactly what the best selection is for the articulations of each line. But you can do that, because you very correctly have each instrument separate, which is huge for making this possible.
There is one general fix for instruments sounding too "computerized," for lack of a better term. On a piece like this, you can take each section - for example, the flute section, which might consist of three players. On the first, you can keep the quantized values. On the second and third you should apply different humanize presets of anywhere from 10 to 25% randomness. Then listen to each track separately and see if that humanizing resulted in any problems, such as overlapping notes, cut-off note ends, objectionable rhythms, etc. If each one sounds o.k., then you can route each of those tracks to a separate instrument - 1st flute, 2nd flute, piccolo, or whatever you are using. If you have to do some transposing/pitch-shifting, which you might on clarinet or others depending on what collection you are using, then each one of those would be an individual instrument. The point is to make each track totally separate from the others, and make it sound good ON ITS OWN. If it does, then when they are blended, with the humanize effect combined with one quantized track, it will sound very natural.
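If you prefer to do this outside the sequencer's humanize presets, here is a rough sketch of the same idea in Python with the mido library. The file name, the track layout (one track per player), and the jitter amounts are all assumptions you would adjust; the jitter is expressed as a fraction of a sixteenth note, which roughly matches a 10-25% preset:

import random
import mido

def humanize_track(track, ticks_per_beat, amount=0.15):
    # Jitter each note onset by up to +/- `amount` of a sixteenth note,
    # e.g. amount=0.15 corresponds roughly to a 15% humanize preset.
    max_shift = int((ticks_per_beat // 4) * amount)
    events = []
    now = 0
    for msg in track:
        now += msg.time              # delta times -> absolute ticks
        t = now
        if msg.type == 'note_on' and msg.velocity > 0:
            t += random.randint(-max_shift, max_shift)
        events.append((max(t, 0), msg))
    events.sort(key=lambda e: e[0])  # keep the event stream in time order
    prev = 0
    for t, msg in events:            # absolute ticks -> delta times
        msg.time = t - prev
        prev = t
    track[:] = [m for _, m in events]

mid = mido.MidiFile('flutes.mid')                        # hypothetical file
humanize_track(mid.tracks[1], mid.ticks_per_beat, 0.10)  # 2nd flute
humanize_track(mid.tracks[2], mid.ticks_per_beat, 0.25)  # 3rd flute
mid.save('flutes_humanized.mid')                         # track 0 stays quantized

Only the onsets are moved, so note lengths change slightly - and exactly as described above, you would then solo each track and listen for overlaps or clipped note ends before trusting the result.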
One thing on this use of one quantized track and other humanized tracks: it is important to determine which instruments you apply each to. For example, I have often used a very strictly quantized (string) bass part, since it underlies the entire orchestra and forms a rhythmic basis for everything else. And also - importantly - it does not exactly double any other instrument. That is important because, if you apply quantize (or leave the notation file intact without humanize changes) to instruments across orchestral groups - flute 1, violins 1, oboe 1, even trumpet 1 if it happens to be playing - and these instruments, which often play in unison in a tutti, are all quantized, then again you will start having that artificial, "too perfect" rhythm that instantly makes the ensemble sound fake. So it is good to use humanize on each instrument that is likely to be playing the same rhythm, even across sections throughout the orchestra, and only use quantize on those that will be playing very different parts. Anyway, that is a general principle you could apply.
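Continuing the sketch above, that principle could be expressed as a per-track policy in which only the bass stays strictly quantized. The track names, amounts, and file name are all hypothetical, and humanize_track is the function from the previous sketch:

import mido
# humanize_track is defined in the previous sketch.

POLICY = {
    'Bass':      0.00,   # the rhythmic anchor: leave strictly quantized
    'Flute 1':   0.10,   # parts likely to double each other in a tutti
    'Oboe 1':    0.15,   # each get a *different* amount of randomness
    'Violins 1': 0.12,
    'Trumpet 1': 0.20,
}

mid = mido.MidiFile('symphony.mid')        # hypothetical file name
for track in mid.tracks:
    amount = POLICY.get(track.name, 0.15)  # default for unlisted parts
    if amount > 0:
        humanize_track(track, mid.ticks_per_beat, amount)
mid.save('symphony_humanized.mid')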
Those are a few general things I thought of that could actually accomplish a lot in making this more realistic, but they are - I admit - based upon doing a fair amount of work in auditioning each track separately and applying the appropriate articulations. That is the main thing. It is not at all strange to have a change of articulation on EVERY SINGLE NOTE of a line, because even though many might seem generally applicable - such as "sustain" or "legato" - in actual orchestral practice there is often a huge amount of variation in the articulations the player is automatically applying, based upon his years of mastery of his own instrument. Also, dynamics: a composer will mark an entire section "f" or "pp," but if you acoustically analyze the individual part performance, you will hear constant crescendi, diminuendi, accents, etc., because of the musical phrasing implied by the content of the line. So the great difficulty one faces is in trying to match that vast amount of variability in all the lines of the orchestra - something which I believe actually dwarfs what a conductor has to do, since he relies upon his players doing all of that for him, automatically. Someone doing MIDI does not have that luxury.
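As one concrete illustration of those hidden dynamics, here is a minimal sketch (again mido, again hypothetical names) that writes a one-bar crescendo as a ramp of controller values. I am using CC 11 (expression) as the target, but the exact controller number is an assumption - you would set it to whatever your preset maps to the velocity crossfade or expression:

import mido

def cc_ramp(track, start, end, length_ticks, control=11, steps=16):
    # Append a linear controller ramp, e.g. a crescendo from pp to f.
    step_ticks = length_ticks // steps
    for i in range(steps + 1):
        value = round(start + (end - start) * i / steps)
        track.append(mido.Message('control_change', control=control,
                                  value=value, time=0 if i == 0 else step_ticks))

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)
cc_ramp(track, 40, 100, length_ticks=4 * 480)   # pp -> f across one 4/4 bar
mid.save('crescendo.mid')                       # hypothetical output name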
I realize you probably know this, being a good musician, but I am stating it to give an idea of what I think needs more work.
A few other things: what sample set are you using? On this piece, you need the entire Symphonic Cube, because it is a heavy-duty piece of music. Also, do you have this in a standard MIDI file, and are you using Vienna Ensemble Pro or what? Also, are you using the VSL presets for instruments, or did you create your own custom presets? Are they selected by keyswitches? I can't do much more unless I have the actual MIDI file in front of me, along with the VE file; then I could make further suggestions by examining exactly what the keyswitches are doing, as well as the controllers.
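In the meantime, this is the kind of quick check I would run on the file itself - a sketch that just lists every keyswitch-looking note per track. The file name is hypothetical, and the ceiling of MIDI note 36 is an assumption you would adjust to your matrix layout:

import mido

KEYSWITCH_CEILING = 36   # MIDI note number; adjust to your keyswitch range

mid = mido.MidiFile('symphony.mid')   # hypothetical file name
for i, track in enumerate(mid.tracks):
    switches = sorted({msg.note for msg in track
                       if msg.type == 'note_on' and msg.velocity > 0
                       and msg.note < KEYSWITCH_CEILING})
    if switches:
        print(f'Track {i} ({track.name}): keyswitch notes {switches}')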
However, whatever the case, this is a great project and I do admire your undertaking it in a serious way!