Vienna Symphonic Library Forum
Forum Statistics

196,053 users have contributed to 43,014 threads and 258,388 posts.

In the past 24 hours, we have 2 new thread(s), 18 new post(s) and 144 new user(s).

  • Can I load all the samples of, say, EW Stormdrum II into one instance of Play and manipulate them via a matrix, or rather a set of matrices of my choosing, as with VI/VIP? If not, which seems to be my early experience, then Play blows. I'm not running 17 instances of the same sampler to make one small collection play, regardless of what else it may offer. If I had to load more than one instance of my VI section, or VI2 section, I'd feel the same way.

    I love VI/VIP. Maybe the lack of "human" you speak of is more about being a human and writing, rather than relying on software to make it human. I only want playability; I'd never ask for more. You find it a nuisance to adjust between two notes? Have you ever played an instrument? Making a passage repeatable is often a huge journey, especially live. I don't want 128 tracks for 32 instruments. The rest is up to me. If you need to do it in software in the first place, you can't preach; that's how I see it. The rest, write it and let someone else deal with it, if someone ever buys it, if that's your goal. I'd rather write music, slowly, till it's me and I like it, without the constraints of someone's programming. If that weren't true, I'd have bought EW for orchestral things, much cheaper if you don't mind sounding like Joe Blow, and then I'd be complaining about how crappy Play is; maybe it's good, but it's certainly not intuitive, which VSL software is. Who needs behind the scenes when everything is in front of you?


  • last edited

    @Another User said:

    Who needs behind the scenes when everything is in front of you?

    You're right... let's get rid of the current humanize feature also... I mean, technically it's already "in front of you", isn't it? Use a pitch wheel, or go into Cubase and draw every last little pitch imperfection on every track using pitch bend, because that would just save loads of time. My point is valid. My suggestions are valid. You may disagree with them, and you might not find them necessary... but that doesn't in any way negate the fact that there are those who would benefit from them. This thread was to suggest a feature. If you have a better idea, I'd be happy to listen, but just because you don't like someone else's, which certainly wouldn't affect your current paradigm or workflow... I see nothing constructive in what you said. Perhaps I'm wrong; please enlighten me. [:D]

    -Sean


  • No, you're right, I said nothing constructive. I shouldn't have posted. I occasionally get caught up in the negativity, and open my mouth when I shouldn't. I've read so many posts lately that are anti-everything, pro-everything, and I don't just mean here.

    I'm willing to spend as long as it takes to get a note to sound how I want, but I realize that's just my way of trying to do things. Some people don't have that luxury. And I just spent the farm to try and do just that, so I'm very excited, and nervous, and frankly a bit touchy LOL. Will feel much better when I figure all this gear out:)

    And my comment about people playing real instruments was obviously off. Many of you are certainly better performers than I'll ever be, and the fact that I only play guitar with any kind of competency, well...

    But that sort of makes me realize why I said some of these things in the first place. Very often, just jamming for example, I'll play my guitar into an audio track (the real guitar) and double it (via my GI-20 MIDI thingie) with a sample. They are NEVER even close; the sample is dead compared to the live instrument. But do I want a few people who think 'this is how to fix it' solving all of my transitions for me? No, I can't imagine that.

    I think you have valid points, especially for someone who is making a living at it, is time-bound, etc., but for the most part I can't, personally, see myself ever wanting things to get too, too automatic.

    Case in point: VST Expression. After trying to wrap my head around it for a couple of weeks, loading VSL's templates etc., and finally seeing how it works, I came to the conclusion: why bother? I had no problem with VI (before I bought Pro) using 4-5 lanes in Cubase, because it was touchable: go into VI, choose your controller, and spend every second in a song paying attention to the details. I'm not saying VST Expression isn't touchable, but unless you want to spend WAAAY too much time writing code, lanes are easier, and they are visual/tactile, giving me much more control. But ask me how I feel if I ever start writing in the Scoring pane; I'll probably deny all this lol!

    Less auto-button pushing, more listening, more decision making. Slower? A lot. But I don't ever think I'll be trying to get my music into movies...

    Sorry again, I came on half-cocked last night. My mistake.

    And FWIW I love the humanize feature in VIP. It allows my samples to play as out of tune as I do in real life:)

    Shawn


  • Oh I was a smart-alec in my reply so it's all good.

    I guess I need to re-word one point. I definitely don't want automatic performance in one way, and I do in another. The humanize-timing idea I mentioned would certainly be automatic. I think it's not absolutely essential, just like the current humanize, but it certainly would help realism without manually adjusting all the start times of each pitch...
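
    The humanize-timing idea could be pictured as a pre-processing pass over the note events: each start time gets a small, bounded random nudge. This is only an illustrative sketch of the concept, not a VI Pro feature; the `(start_ms, duration_ms, pitch)` event format and the `spread_ms` bound are my own assumptions.

```python
import random

def humanize_timing(notes, spread_ms=15.0, seed=None):
    """Nudge each note's start by a random offset in [-spread_ms, +spread_ms].

    `notes` is a list of (start_ms, duration_ms, pitch) tuples; durations and
    pitches pass through unchanged, and no note is pushed before time zero.
    """
    rng = random.Random(seed)
    out = []
    for start, dur, pitch in notes:
        offset = rng.uniform(-spread_ms, spread_ms)
        out.append((max(0.0, start + offset), dur, pitch))
    return out

# Straight eighth notes at 120 BPM (250 ms apart) gain a slightly loose feel.
straight = [(i * 250.0, 200.0, 60) for i in range(8)]
loose = humanize_timing(straight, spread_ms=15.0, seed=42)
```

    Keeping the spread small (10-20 ms) loosens the grid without destroying the groove, and a seed makes the result reproducible.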

    The thing I don't want automatic is fine-tuning, which I think people misunderstood. I want a very specific sound out of "my orchestra" so I fine tune and I will also take as long as it needs. Any artificial 'clues' revealing the use of samples, I try to get rid of as much as I can... All I meant was that DURING the fine tuning process I would like features that will save as much time and work as possible in getting to that fine-tuned 'right sound'. I doubt anyone would disagree with that, I simply didn't word it well. How to save the amount of fine-tuning time it takes... I am not sure. I gave the crossfade idea, but ultimately just for the goal I just mentioned. I really want fine-tuning time savers more than anything but without diminishing the level of control I have over VSL.

    -Sean


  • Well, I'm not sure whether this relates to VI Pro or to each individual library and how its samples are edited.

    I loaded up some Appassionata violins and alternated between different patches all playing the note E4, and noted the levels in dB (RMS and peak, in a Logic Multimeter), first at Velocity X-fade 20, then at 110. The patches used are:

    1 Sus Vib Strong

    2 Perf Legato 4 velocities

    3 Stacc

    4 détaché

    5 Pizz.

    6 Trem

    Vel X-fade 20: 1 (sus) -22 dB, 2 (leg) -17 dB, 3 (stacc) -17 dB, 4 (dét) -19 dB, 5 (pizz) -25 dB, 6 (trem) -20 dB

    Vel X-fade 110: 1 (sus) -25 dB, 2 (leg) -26 dB, 3 (stacc) -15 dB, 4 (dét) -24 dB, 5 (pizz) -28 dB
    At Vel X-fade 110, when I go from two short notes (stacc then détaché), the second is 9 dB louder. I really can't believe my ears. This really isn't acceptable (or am I missing something?). That's why I'm having to insert automation data note by note. And going from sus to legato at a low level (Vel X-fade 20) there's a huge jump, while at Vel X-fade 110 it was smooth. Why haven't all patches at corresponding velocity layers been brought to the same levels?

    Let's forget the pizz and sfz, but shouldn't sus, legato, stacc, détaché, trem, and trill have very close levels, allowing us to just draw in a bit of phrasing with the Exp/Vel X-fade faders, with the rest just sounding musically correct?

    Can someone from VSL please chime in and maybe tell me where I’m going wrong, or maybe what could be done? [B]
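
    For anyone who wants to repeat this kind of comparison outside Logic's Multimeter, the RMS level of a rendered note can be computed straight from the audio buffer. A minimal sketch, assuming float samples in the range [-1, 1]:

```python
import math

def rms_db(samples):
    """RMS level of a float buffer, in dB relative to full scale."""
    if not samples:
        return float("-inf")
    mean_sq = sum(s * s for s in samples) / len(samples)
    if mean_sq == 0.0:
        return float("-inf")
    return 10.0 * math.log10(mean_sq)

# A full-scale square wave measures 0 dB RMS; halving the amplitude drops it ~6 dB.
full = [1.0, -1.0] * 1000
half = [0.5, -0.5] * 1000
```

    Rendering each patch at the same note and velocity and running the result through `rms_db` gives figures directly comparable to the ones above.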


  • I too dislike the inconsistent volumes between patches as well as the round robins. It is possible to create your own matrices and presets that take these into account. There are 2 volume faders located in VIPro that can be adjusted to the specific patch without affecting the volume of the entire instance as a whole. There is one volume fader under Advanced: Mixer, and another one in the matrix list. I strongly recommend taking the time to create your own matrices and play through each of the patches while adjusting for volume. Honestly, this takes a long time. It took me nearly a week of 16 hour days to set up all my instruments.
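
    To make that week of balancing a little more systematic, the trim for each patch fader can be computed from measured levels instead of set purely by ear. This is a sketch of the arithmetic only; the patch names, the measured figures (borrowed from earlier in this thread), and the -20 dB target are illustrative assumptions:

```python
def gain_offsets(levels_db, reference_db=-20.0):
    """For each patch, the dB trim that brings its measured level to reference_db."""
    return {name: round(reference_db - level, 1) for name, level in levels_db.items()}

# Appassionata levels at Vel X-fade 20, as reported earlier in the thread.
measured = {"sus": -22.0, "leg": -17.0, "stacc": -17.0,
            "detache": -19.0, "pizz": -25.0, "trem": -20.0}
trims = gain_offsets(measured)  # e.g. sus gets +2.0 dB, legato gets -3.0 dB
```

    The resulting trims would then be dialed into the per-patch faders, though a single static trim cannot fix level relationships that invert between velocity layers.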

  • I did some tests yesterday with the Fl. 1. In the mixer I lowered the sus by 3 dB. In some circumstances that can help, but I didn't find any consistent settings that would lessen the need to automate. And since each patch contains 2-4 samples (that we have no access to), there's not much we can do.

    I know that there are tens of thousands of samples that would need to be re-edited, but there are programs that should be able to do that automatically.


  • A convolution reverb such as Altiverb helps as well-- at least to my ears. In the convolving process somehow the transients seem less abrasive or get tempered in some way. I think what happens is that when two notes of moderately differing volumes occur shortly after one another, the early reflections tend to mask the jumps in volume. Have you tried to just "let it be" and throw a convolution on?

  • I would say that in the context of an entire mix some of these jumps do get masked, but when exposed, even through a great reverb, the differences stand out.

    I'd be interested to hear how others are getting around this.


  • Well, I've taken the time to do some more testing, perturbed by the thought of all the time, past and future, lost just correcting levels with automation.

    I looked at the Trumpet in C short articulations, and the variances seem normal (+/- 3 dB) on several notes that I looked at.

    But an extremely bad case is the Ob II. I tested the note E5 at a velocity of 32.

    Stacc -24 dB, Port short -18 dB, Port med -11 dB, Port long vib -19 dB..... so a 13 dB difference in levels. If you use all of those in a phrase, you'll probably be correcting at least half of them. The jump from -24 dB to -11 dB is incredible.

    And then, surprise: still E5 but up at velocity 110, Stacc -15 dB, Port short -19 dB, Port short -17 dB, Port med -14 dB..... so the differences between the articulations are almost inverted, with the stacc here being the loudest.

    It’s clear that there hasn’t been enough editing done on these samples.


  • I think these measurements are good examples of how certain instruments work in the real world.

    In the higher register of an oboe, very short notes like staccato do have a higher volume range, because the tonal development of a staccato is not as big compared to a longer note performed in pianissimo. Longer notes develop more tone, therefore the volume is usually higher. Furthermore, performing vibrato needs more tonal stability, so a medium-long note with vibrato will always be much louder than a staccato in pp.

    In ff a staccato note will usually be louder than a sustained one; the whole possible impact is produced in a short amount of time. Longer notes, if they are not performed marcatissimo, develop their maximum within the note. Also, tone quality is more demanding on a long note than on a staccato; the musician has to take care that the tone "works".

    Generally we take a lot of care in editing our samples; of course we can make mistakes, no question about that.

    What we have is a very good reference in our dynamic samples and the dynamic repetition patches for all the mapping values.

    If you need more control over dynamic ranges I strongly recommend VI PRO. Besides setting the individual volume of a certain patch in your template, you can also adjust the dynamic range of each patch individually. And lastly, you can set up volume graphs on your keyboard to adjust volume for dedicated play ranges, even on single notes within a patch.

    best

    Herb


  • last edited

    @herb said:

    very short notes like staccato do have a higher volume range

    That's something that I completely overlooked. Don't get me wrong, I still stand by my suggestion to find ways to lessen the time it takes to fine-tune to the sound you want... but the inconsistency between volumes makes a lot of sense now.

    With that, I would prefer to keep the range at the maximum, as I think restricting the range of an instrument would produce unnatural results. I suppose something like that has to be handled on a note-by-note basis, unless you want to restrict the ranges or volumes in your template... which, as I said, might produce an instrument with less realistic dynamic flexibility.

    Herb (if you're still reading) or anyone else... I have an either/or question. I'd either like to strongly suggest ways of reducing the time it takes to fine-tune crossfading to get that perfect sound (and I'm curious what attempts or concepts could possibly be implemented, if any, to help with this), OR I would like some feedback on how to get those results quickly.

    ---------

    This is the only example I have online already, but here goes... (Just keep in mind, I know a lot more about mastering, reverb, and mixing than I did when I made this... and it was done in a massive hurry for a class I had last semester, so there are plenty of problems with it.)

    http://soundcloud.com/iscorefilm/graduation-theme

    Below I posted a couple of examples of the crossfading used. The French horn example includes more variation with the expression tool instead of the crossfade tool, but the point is that nothing sounded right, like what I imagine a horn player doing, until I drew it like this... and I didn't simply draw it and be done; it took listening, drawing, listening, adjusting, a few times to get it to sound right. The cello example is simply to show the kind of drawing it takes to get some good phrasing.

    I'm not saying that we shouldn't have to fine-tune... but I don't think anyone would disagree that standard notation gives the performer nowhere near this much detail. Cubase is a performance tool, unlike Sibelius, so I get that we need to fine-tune... but I need an easier way... and I can't use a simple mod wheel for it; 'by hand' feels too sloppy when, in most instances, getting a sound that even sounds remotely real takes much finer work than drawing by hand would produce.

    Any advice on how time can be saved here? Or any features possibilities that could lessen the time it takes to fine-tune? This is the whole reason I started the thread.

    Thanks,

    -Sean

    At 00:21 there is a French horn line

    http://img853.imageshack.us/img853/3040/crossfade02.jpg

    At 01:44 there is a cello line

    http://img854.imageshack.us/img854/3848/crossfade01.jpg


  • Thanks for your answer, Herb.

    I must beg to differ, though. My specs above reflect short notes. In the real world, Stacc, Port short, Port med, and Port long have a similar attack shape; they are just shorter or longer, with the portato articulations of course having decays that the stacc doesn't. Yet these levels vary enormously (Stacc -24 dB and Port med -11 dB). The Port long samples do "develop" a bit, but I noted the loudest levels.

    You say that "medium-long notes with vibrato will always be much louder than a stacc in pp" (I'm not sure that I agree). So if I play a med-long vib note, then a stacc (all in pp), it will sound musical without me editing anything?

    Have you purposely edited stacc louder than the sus at FF? I’ll check and see if that is consistently the case.

    A performer can play a passage at ff alternating stacc, marcato, sus, legato, pizz, portato, etc., all at a very even level. Or in another piece he can vary slightly, and in another there will be even more differences. As you know, every phrase differs, every context differs.

    In the hands of a real musician there are literally millions of articulations, because he shapes them according to the phrase, the spirit, and so on. But in the sample world the articulations are always the same morbid old ones! In order for me to create music with these samples I need rigorously consistent levels, so that I only have to draw in the "musical phrase and performance" with my automation. Then I can use the great humanize features to introduce "inconsistency", should that add to the context. Instead, far too often I have to spend literally hours with automation, just to bring the samples to a "neutral level".
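
    Since the sequencer knows each note's articulation, that "neutral level" pass could in principle be generated instead of drawn: every note gets the volume trim implied by its articulation's measured level, leaving only the musical phrasing to automate by hand. A hypothetical sketch; the event format, helper name, and level table (Oboe II figures from this thread) are my own assumptions, not any VI Pro or Cubase API:

```python
def neutralizing_automation(notes, patch_levels_db, target_db=-20.0):
    """Emit (time, trim_db) volume-automation points that bring each note's
    articulation to target_db, so only the phrasing remains to be drawn.

    `notes` is a list of (start_time, articulation) pairs.
    """
    events = []
    for start, articulation in notes:
        trim = target_db - patch_levels_db[articulation]
        events.append((start, round(trim, 1)))
    return events

# Oboe II, E5 at velocity 32, levels as measured earlier in this thread.
levels = {"stacc": -24.0, "port_short": -18.0, "port_med": -11.0, "port_long_vib": -19.0}
phrase = [(0.0, "stacc"), (0.5, "port_med"), (1.0, "port_long_vib")]
auto = neutralizing_automation(phrase, levels)  # [(0.0, 4.0), (0.5, -9.0), (1.0, -1.0)]
```

    Of course a table like `levels` would have to exist per instrument and per velocity layer, which is exactly the measuring work being discussed here.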

    As I said, I found the short notes on the Trumpet in C (I tested 4 different notes at both low and high velocity) to be very playable and musical. But many of the WW and strings, imo, are much less playable.

    I may not have totally grasped all the capabilities of VI Pro (which I use), but I don't see any way to "correct the levels" in any time-saving and consistent fashion. And if there were settings to better the performance of these libraries, I would hope that VSL would furnish them.

    Can you give us your global editing criteria on what relationship the levels of the different patches have at VSL? Are legatos always louder than sus? I haven't detected any consistent pattern in the instruments that I've examined closely over the last couple of days. Do problems arise from the fact that there are sometimes 2, sometimes 3, sometimes 4 different velocity levels?

    To really speak well of these matters, we need lots of audio examples. I will be furnishing some in the coming days, and I invite others to do so as well.

    All best !


  • last edited

    @Another User said:

    far too often I have to spend literally hours with automation, just to bring the samples to a "neutral level".

    I agree with this 100%. Not all the time, but there are often times when, to get the sound that best resembles a real player, things need to be neutral... sometimes they don't... But I do spend far too much time doing that, and on the phrasing, as I mentioned in my other post. I want that flexibility; I just don't want to spend hours per phrase to get that 'real' sound.

    I realize my comments might be redundant; I guess I just feel like this point isn't being heard very well, and it's absolutely essential to having a realistic and cost-effective workflow using samples.

    -Sean


  • I would love to get my hands on some settings that would lessen editing time. I've tried, say, lowering a stacc patch in a cell by 3 dB. That may "work" at a pp velocity level with regard to a sus patch, but at an ff level it won't. And by reducing the dynamic range of a patch, what is that going to do for me? I'll probably need all the possible dynamic extremes.

    That's not really what is at hand here, though. We're talking about basic stuff: at velocity 64, playing sus, stacc, and legato, and getting normal results on all instruments.

    If there really are workable settings in VI Pro, I'd love to have them. But if there were settings to make the basic stuff easier, why haven't we heard about them?


  • You guys are thinking too much. I understand the wish to save time, but you've been using VSL libraries successfully for many years (I suspect), with these 'transition' faults, without batting an eye, until it became a forum topic. If it were a real issue, it would have been addressed a long time ago by VSL. It's an affectation, not an issue. No one but those who listen on a microscopic level hears this, I'm sure. Who do you want to please, the people who listen, or the people who play? If listeners, it's a non-issue; if players, well, that's your ego talking. If yourself, learn to dumb it down, as has always been the case.

    I am speaking grossly, of course; maybe some patches need reworking, but in general no one hears these slight differences except the musician who wrote the piece. So move on and think in 8-bit for your listeners.

    Shawn


  • last edited

    @jammusique said:

    I’ll probably need all the possible dynamic extremes

    That's just the point, though... if an oboist can play a staccato note louder than a sustain, then having a full dynamic range would not allow for a consistent volume level between patches. One thing I haven't played with is the X-fade on/off option. It would certainly be easier than crossfades in certain instances... but there are still plenty of times where I need crossfading on and I still run into the problem... so it would still be nice to have it addressed.

    To VSL (or anyone):

    This thread has consisted of two issues: 1) the volume consistency bit, which has been addressed, and 2) the crossfade bit about limiting the amount of work it takes to fine-tune to get a good result... which hasn't properly been addressed. I'm not trying to diminish the first point, but I created the thread entirely for the second point. Please, any answers, feature possibilities, suggestions for users?

    Thanks,

    -Sean


  • Hi all, I couldn't agree more about the volume issues. I still think VSL has the best samples ever, but a lot of people are moving to other companies' products due to the complexity of VSL. I like this complexity when it permits me to use tons of articulations (as only VSL permits). But I like less the fact that I need to fine-tune my volume when going from one articulation to another.

    But my main concern is the phase on velocity crossfading! It was created a while ago, and since then it seems that times have changed and a lot of improvements have been found. Would it be possible to change the velocity crossfading to avoid, as much as possible, the phasing and the "multi-instrument" effect (i.e. when using the solo violin and riding the crossfade, hearing two violins because samples are layering)?

    Regards

  • last edited

    @Hicks said:

    my main concern is the phase on velocity crossfading!

    I'm glad you mentioned this. I've been writing and noticing it, then thinking of posting that in this thread... but I kept forgetting. I agree completely! Phasing is definitely a concern of mine, and the solo violin was the exact example I was using a few hours ago.

    Could a 'crossfade humanize' feature be possible? Something to automate the imperfections that a real player has... I mention this because VSL's samples are recorded as plainly and evenly as possible. I certainly get why, there is a need for it in many ways... but it's also in every way inhuman. I still want the ability to fine-tune, but the current humanize feature for pitch imperfections allows for both fine-tuning and a degree of automatic 'humanization'. A similar feature for crossfading, making things more human rather than dynamically flat, would in my opinion be an essential feature.
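
    To make the idea concrete, one speculative way a 'crossfade humanize' could behave is to superimpose a slow, bounded random drift on the drawn crossfade curve, so the dynamics breathe slightly instead of sitting perfectly flat. This is a sketch of the concept only, not an existing VI Pro feature; the parameters and value range are assumptions:

```python
import random

def humanize_crossfade(curve, depth=4, smoothing=0.9, seed=None):
    """Add a slow random drift to a CC crossfade curve (values 0-127).

    `depth` bounds the drift in CC steps; `smoothing` (0..1) makes the drift
    change gradually rather than jitter from point to point.
    """
    rng = random.Random(seed)
    drift = 0.0
    out = []
    for value in curve:
        # One-pole smoothing keeps successive drift values correlated.
        drift = smoothing * drift + (1.0 - smoothing) * rng.uniform(-depth, depth)
        out.append(int(min(127, max(0, round(value + drift)))))
    return out

# A flat crossfade held at CC 64 gains a gentle wobble of a few CC steps.
wobbly = humanize_crossfade([64] * 200, depth=4, seed=1)
```

    Run on top of a hand-drawn curve, the fine-tuned shape survives while the last degree of machine flatness goes away.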

    In the meantime, I agree... VSL still has the best samples available...

    -Sean


  • last edited

    @Hicks said:

    But my main concern is the phase on velocity crossfading! It has been created while ago and since it seems that time haas changed and a lot of improvement have been found. Would it be possible to change the velocity crossfading to try to avoid the most the phase and the "multi" instrument effect (ie when using solo violin and riding the xfading and hearing two violins because samples are layering). Regards

     

    This is a very real concern, and I would hope that VSL is actively working on solving it. Phase-aligning such a huge sample set is a really daunting task, so I would settle for just the legato and sustain samples. However, this would not suit the people who like to do all dynamic control with a continuous controller. That has always seemed counter-intuitive to me, as "one-shot" samples are perfectly suited to velocity control, but would those people be prepared to accept a compromise?

    AFAIK the only sample library developer that allows you to crossfade on solo instruments without phasing is Samplemodeling, and that is not really a sample library. I would imagine that phase-aligning a few MB of samples takes far less time than a few hundred GB. [;)]
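
    For a single pair of layers, the core of phase alignment is just finding the lag that maximizes cross-correlation and shifting one layer by it before crossfading; the daunting part is doing it across an entire library. A minimal illustrative sketch (pure Python, brute-force, and far too slow for real sample sets):

```python
import math

def best_lag(a, b, max_lag=64):
    """Lag (in samples) by which `b` trails `a`: advance `b` by the result to align."""
    def corr(lag):
        if lag >= 0:
            pairs = zip(a, b[lag:])   # compare a[i] with b[i + lag]
        else:
            pairs = zip(a[-lag:], b)  # compare a[i - lag] with b[i]
        return sum(x * y for x, y in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

# Two "velocity layers" of the same 100 Hz tone, the second delayed by 10 samples.
sr = 8000
tone = [math.sin(2 * math.pi * 100 * n / sr) for n in range(1000)]
delayed = [0.0] * 10 + tone[:-10]
lag = best_lag(tone, delayed)  # detects the 10-sample offset
```

    In practice this would be done with FFT-based cross-correlation (e.g. via `numpy.correlate` or `scipy.signal.correlate`) per note and velocity layer, which is exactly the scale of work that makes retrofitting an existing library so costly.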

    DG