Vienna Symphonic Library Forum

  • Better than looping?

    King wrote:

    @Another User said:

    Rest assured that there are options that will appear in the future that won't require looping every sample. If not from VSL then from me, since I'll be making them myself. I've already designed some ideas that can work, but they require loading multiple instruments and would be better to wait for Giga 3.

    There are also possible features in Giga 3 that will make it even more useful, and possibly BETTER than looping the actual waveforms.


    (Moving this over to a new thread for a fresh start)

    King, are you hinting at real-time crossfading, or perhaps something more sophisticated than that (e.g. "morphing")? That actually suggests some interesting possibilities for performing parts rather than programming them - as you obviously realize.

    I've said it before and I'll say it again: I feel that the libraries are way ahead of the performance interface, and that's where there's the most room for growth. There just has to be a better way of telling the sampler which articulation you want than moving notes over to a separate track (or using program changes, which I don't like). Keyswitches are a great tool to have in the arsenal, but they become unwieldy when you have more than about five of them.

    There are many things that change with different articulations, but for the one we're talking about here - length - it seems to me that the sampler would just have to start timing after it receives a Note-on to know when to start going into its crossfading/morphing routine. Things certainly get more complicated than that, because there are many samples it could crossfade to. But I still think this is the kind of direction things should be moving.

    A few months ago I posted about the idea of using the rate of CC change as a parameter to switch articulations. Georgio Tomasini programmed a little routine in Building Blocks (which you could also do in the Logic Environment) that switched MIDI channels in response to how quickly - not how hard! - you blow into a breath controller. So a fast CC rise switches to a fast attack articulation, a slow rise gives you a slower attack, etc. This would also have to time how long it's been since the note-on so it knows not to switch when you just want the level to rise quickly during a sustained note - i.e. my thinking is probably not fully baked - but I think this is on the right track?
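
    Here's a rough, self-contained sketch of that CC-rate idea (the attack window, rate threshold, and patch names are all made-up values, not anything from an existing sampler): the articulation is chosen from how fast the controller rises, but only within a short window after the note-on, so a quick swell later in a held note doesn't trigger a switch.

        # Toy articulation switcher driven by CC rise rate, gated by time since note-on.
        # All constants below are arbitrary illustration values.

        ATTACK_WINDOW = 0.15   # seconds after note-on in which CC rate picks the articulation
        FAST_RISE = 400.0      # CC units per second treated as a "fast" rise
        ARTICULATIONS = {"fast": "marcato", "slow": "legato"}   # hypothetical patch names

        class ArticulationSwitcher:
            def __init__(self):
                self.note_on_time = None
                self.last_cc = (None, None)   # (time, value) of the previous CC message
                self.current = "legato"

            def note_on(self, t):
                """Start timing so the CC rate is only evaluated near the attack."""
                self.note_on_time = t

            def control_change(self, t, value):
                """Return the articulation to use for this note."""
                prev_t, prev_v = self.last_cc
                self.last_cc = (t, value)
                if prev_t is None or self.note_on_time is None:
                    return self.current
                within_attack = (t - self.note_on_time) <= ATTACK_WINDOW
                rate = (value - prev_v) / max(t - prev_t, 1e-6)   # CC units per second
                if within_attack and rate >= FAST_RISE:
                    self.current = ARTICULATIONS["fast"]
                elif within_attack:
                    self.current = ARTICULATIONS["slow"]
                # Later in the note a fast CC rise just raises the level; no switch.
                return self.current

        if __name__ == "__main__":
            s = ArticulationSwitcher()
            s.note_on(0.00)
            print(s.control_change(0.02, 10), s.control_change(0.05, 60))   # legato, then marcato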

    Some editing is always going to be necessary, but the goal for me would be to be able to "just play" most of the time.

  • A trumpet, for instance, has 3 valves, several feet of tubing, a flared bell, and you make a buzz sound into the mouthpiece. What could be easier than that? We should keep trying physical modeling. Imagine using sampled mouthpiece buzzing to resonate a virtual instrument. Or how about a virtual mouthpiece for all the brass?

    It just seems to me the more we try to capture every darned articulation, the more time we ought to have spent on physical modeling. You can spend $1Mil and 5 years on creating the ultimate trumpet, covering nearly 80% of all possibilities (because there are a lot, like half-valving, etc.). But instead we could have created a perfect acoustic virtual trumpet and had all the physical possibilities in one G5-powered trumpet plugin!!!

    I know that Yamaha tried to do this a while back, but they didn't continue trying, and that was the downfall of that technology. Also, they did it inside a hardware synth. The enticing possibility of using a G5 processor to run virtual acoustic instruments should re-ignite that fight. We need to continue to pursue it, as we now have Artificial Intelligence algorithms that we didn't have before.

    Honestly, ... 3 valves ... a pipe ... a flared bell ... and a buzz sound. Come on. Piece of cake. For the R&D we spend on the samples we ought to be able to put together something as simple as a real instrument, digitally.
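
    To make the "buzz resonating a virtual instrument" idea a bit more concrete, here's a toy sketch (nothing remotely like a real trumpet model, and every constant is made up): a crude pulse-train "buzz" excites a single lossy delay line standing in for the air column. A real brass model would need a nonlinear lip/bore coupling, bell radiation, valves, and so on.

        # Toy "buzz into a tube" sketch: pulse-train excitation feeding a lossy,
        # inverting delay-line loop. Purely illustrative; not a trumpet model.
        import numpy as np

        SR = 44100
        f_lip = 233.0                          # buzz (lip) frequency in Hz, arbitrary
        tube_delay = int(SR / (2 * f_lip))     # half-wave delay so the loop resonates near f_lip
        reflection = -0.95                     # lossy, inverting reflection at the far end

        n = SR * 2                                     # two seconds of audio
        buzz = np.zeros(n)
        buzz[::int(SR / f_lip)] = 1.0                  # crude pulse-train "mouthpiece buzz"

        tube = np.zeros(tube_delay)                    # circular delay line = air column
        out = np.zeros(n)
        idx = 0
        prev = 0.0
        for i in range(n):
            y = tube[idx]
            filtered = 0.5 * (y + prev)                # two-point average stands in for wall/bell losses
            prev = y
            tube[idx] = buzz[i] + reflection * filtered
            out[i] = y
            idx = (idx + 1) % tube_delay

        out /= np.max(np.abs(out)) + 1e-12             # normalize before listening or saving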

    Evan Evans

  • Can AltiVerb use any sound as an impulse? I don't fully understand how that works. But maybe if you sampled yourself doing the buzzing sound going from low to high, you could then play it back into a trumpet and record the resultant response. Then you could use a buzz mouthpiece input, or sample set, as a sound source and feed it through the "trumpet" AltiVerb to achieve the trumpet sound.

    Anyone have any insight on if this could be done?
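
    For what it's worth, here's roughly what that would look like as plain convolution (the file names are hypothetical, and this only captures the trumpet's linear resonant coloration; the real lip/bore interaction is nonlinear, so don't expect an actual trumpet out of it):

        # Convolve a dry mouthpiece buzz with a measured "trumpet" impulse response.
        # Both .wav file names below are hypothetical placeholders.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import fftconvolve

        sr_ir, trumpet_ir = wavfile.read("trumpet_impulse_response.wav")   # measured IR
        sr_in, buzz = wavfile.read("mouthpiece_buzz.wav")                  # dry buzz take
        assert sr_ir == sr_in, "resample so both files share one sample rate"

        # work in float, mono
        trumpet_ir = trumpet_ir.astype(np.float64)
        buzz = buzz.astype(np.float64)
        if trumpet_ir.ndim > 1:
            trumpet_ir = trumpet_ir.mean(axis=1)
        if buzz.ndim > 1:
            buzz = buzz.mean(axis=1)

        wet = fftconvolve(buzz, trumpet_ir)            # "buzz played through the trumpet"
        wet /= np.max(np.abs(wet)) + 1e-12             # normalize to avoid clipping
        wavfile.write("buzz_through_trumpet.wav", sr_in, (wet * 32767).astype(np.int16))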

    Thanks,
    Evan Evans

    Dec 24. Merry Christmas, Happy Hanukah, Happy Holidays!

  • I have no idea of what King is talking about but agree completely with Nick on this - the VSL and other libraries are so far beyond the software samplers it is ridiculous. Especially astoundingly clumsy ones like gigastudio. The old EOS operating system was pure elegance compared to the non-intuitive, non-musical, non-human interface of giga.

    Also, just being able to "play" -- that is the ideal. Think of the art of playing that other keyboard - the piano. Everything from Beethoven to Liszt to Gershwin to Duke Ellington is available instantly, in real-time, if you are good enough to play it. Even though I use the VSL and believe it is the best sampling library ever created, I am still using an Emulator with the Miraslav library I completely re-programmed, attempting to make as much playable in real-time as possible. I designed it to be able to access (by means of the EOS vastly-superior-to-gigastudio modulation routing) sostenuto, marcato, detache, as well as filtered and timbrally layered dynamics, all in one pass, by means of the combined pitch-wheel-mod-wheel-velocity controlling layers/volume/filter/detaches and marcati. So even though it is dwarfed by the VSL in sampling detail, it is so playable that you can do a recording very easily, maintaining a musical approach rather than a programming/technical/engineering approach. I think that Beat Kaufman said he did something similar with his recordings - first doing them on a system he could "play" them on, then re-doing them on the programming-intensive VSL.

    On the physical modeling though, it is in a way a different subject than sampling. It can create a really good fake of an instrument. But a sample is not a fake. It is a way of doing a recording of real performances one note or articulation at a time. And that is an aesthetically/philosophically different thing altogether. Probably in practice it could be almost the same, but I actually want the pure, actually performed samples of every articulation rather than advanced digital emulations, etc. because they directly correspond to notated and imagined musical ideas.

    What Evanevans is talking about though is very interesting and sounds like what Yamaha has done with the Vocaloid. That's an extremely exciting project but unfortunately doesn't have anything for classical/opera/film vocal sounds at least right now.

    By the way, I don't know if anyone is interested but Emu is making the E5 a software sampler. It is still being developed, but I would not be surprised if it blows all the others off the map, because even Emu's old operating system is still better in many respects than the current gigastudio and EXS.

  • Quickly in response to the E5 softsynth: I think that would be great. I still herald the Kurzweil as the best set of DSP and envelopes for a sampler instrument, but we'd have to add release notes and some other new techniques to their mix. But I'd love to see a Kurzweil plugin.

    @Another User said:

    On the physical modeling though, it is in a way a different subject than sampling. It can create a really good fake of an instrument. But a sample is not a fake. It is a way of doing a recording of real performances one note or articulation at a time. And that is an aesthetically/philosophically different thing altogether. Probably in practice it could be almost the same, but I actually want the pure, actually performed samples of every articulation rather than advanced digital emulations, etc. because they directly correspond to notated and imagined musical ideas.
    Yes, true. Very good point. I agree with you. But I think there is another way to engineer the sampling of an instrument than by current means. I have some theories and am working with MIT on expanding some of my ideas, but I cannot talk about them at the moment. Just that a kind of combined physical modeling and sampling may be the best route to take us to the next level while we wait for the ultimate physical modeling to settle the whole debate once and for all, say 20 years from now.

    Evan Evans

  • As a sidenote to William -

    I agree that the EOS from E-mu was/is a mighty tool in the hands of the right people. But still I think that the ideal combination of a powerful and flexible, yet straightforward and PLAYABLE sampler was the EPS16+/ASR-10 line from Ensoniq.

    "EPS" was the abbreviation for Ensoniq Performance Sampler, and they meant it that way: Performance with a capital "P"! ... I still have mine (two of them - both packed away in my garage, admittedly 8-] ...)

    /Dietz

    /Dietz - Vienna Symphonic Library
  • Just to be clear, my point isn't to complain about the samplers we have. On the contrary, I really love working with them. A year and a half ago I was still mainly using a Kurzweil K2000 for orchestral writing, and the difference is laughable.

    My point is that the playing interface is the next big frontier in sampling. The difference between sampling and physical modeling is only (almost, anyway) the way the sounds are produced. If a physical modeling synth can respond to your performance gestures by producing a certain sound, a sampler can too.

    Easier said than done, I know! [H]

  • I'm only hinting at options I see as "doable" in the near future with Giga 3.

    There are multiple approaches, and this is only what *I* see; there could be other things the VSL peeps are doing.

    One simple option would be the combination of "normal" sustains with the legato patches and programming automatic xfades between the two.
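
    As a minimal sketch of that kind of automatic xfade (assuming the sustain and legato patches sit on two channels whose expression you can address separately; the curve and values are arbitrary), one controller position fades one layer out while fading the other in with an equal-power curve, so the combined level stays roughly constant:

        # Equal-power crossfade between two layers, expressed as 0-127 CC values.
        import math

        def crossfade_cc(position):
            """position 0.0 = all legato layer, 1.0 = all sustain layer."""
            legato_gain = math.cos(position * math.pi / 2)    # equal-power pair
            sustain_gain = math.sin(position * math.pi / 2)
            return round(legato_gain * 127), round(sustain_gain * 127)

        # e.g. drive `position` from time since note-on, or from the mod wheel:
        for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(pos, crossfade_cc(pos))   # (127, 0) ... (90, 90) ... (0, 127)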

    Others are things that I'm not even supposed to know about, but I've been playing a lot of Metal Gear Solid and Splinter Cell, so I've been able to recon some info by putting puzzle pieces together (among other things).

    About morphing: this is actually possible right now with some synthesis tools. I don't think it's on par with actual sample xfades just yet, though. Maybe in the future, or with a combination approach: xfading from natural samples to synthesis for the actual morph, and back to natural. With the right settings it might be possible to get a good sound.

    I have a pretty unique idea about sample playback, but it's dependent on a totally different engine than the sample playback engine. I've been experimenting, though; there are a ton of different ways to push the envelope.

  • Another person I need to get drunk and hear the tales...
    [:D]

  • hehehee

    make sure to bring lots of $$$ I'm not a cheap date [;)]

  • @evanevans said:

    A trumpet, for instance, has 3 valves, several feet of tubing, a flared bell, and you make a buzz sound into the mouthpiece. What could be easier than that? We should keep trying physical modeling. Imagine using sampled mouthpiece buzzing to resonate a virtual instrument. Or how about a virtual mouthpiece for all the brass?


    It's not so much generating the waveform of the instrument - that's easy. The hard part is making that sound natural in a spatial environment. But people are working on that too. Then the hardEST part is the expressive power of the instrument as played by a human. I have no idea how to replicate that.