Vienna Symphonic Library Forum
Forum Statistics

191,202 users have contributed to 42,788 threads and 257,323 posts.

In the past 24 hours, we have 2 new thread(s), 3 new post(s) and 45 new user(s).

  • What graphics card is recommended for the maximum GPU audio performance?

    I'm guessing it's the NVIDIA RTX 4090, but would like to hear about it.

    Are there benchmarks available? Some kind of test that shows how much processing is offloaded with different models? Are there latency benchmarks as well? Do the higher-end graphics cards yield lower RTL? If so, by how much?


  • last edited

    @VirtualVirgin said:

    I'm guessing it's the NVIDIA RTX 4090, but would like to hear about it.


    Are there benchmarks available? Some kind of test that shows how much processing is offloaded with different models? Are there latency benchmarks as well? Do the higher-end graphics cards yield lower RTL? If so, by how much?

    Hi @VirtualVirgin, the answer is quite simple: as much as you can afford 😊

    Regarding accurate testing, I think any modern desktop-grade GPU would handle any workload you practically need. If we're talking about desktop-grade GPUs to recommend, you can buy an RTX 4060 Ti with 16 gigs of memory for $450-$500 on Amazon U.S.

    I can imagine 16 gigs will work for 99.9% of cases: a few hundred instruments, multichannel, in MIR Pro 3D.

    Once new products are supported (hopefully soon), you will still have a decent amount of memory, enough to power the largest session you have ever created.

    For laptop-grade GPUs I would not recommend going lower than the RTX 4080, which most often comes with 12 gigs. It costs around $2,300-$2,500 on Amazon U.S.

    The same goes for AMD equivalents.

    Macs are different, but I would never recommend going below 16 gigs. I use my M2 Air with 24 gigs, and it's enough for everything, powerful enough to do anything I want (which is probably why I don't want to go with a MacBook Pro, even though I have the budget).

    I hope that helps!


  • @Sasha-T said:

    Regarding accurate testing, I think any modern desktop-grade GPU would handle any workload you practically need. If we're talking about desktop-grade GPUs to recommend, you can buy an RTX 4060 Ti with 16 gigs of memory for $450-$500 on Amazon U.S.

    So just to be clear,

    You are saying that for GPU audio, the 4060 is better than the 4070 because the former has 16 GB whereas the latter has only 12 GB, and this is an improvement even though the 4070 outperforms the 4060 on most tests by roughly 40%?


  • @VirtualVirgin said:

    So just to be clear,

    You are saying that for GPU audio, the 4060 is better than the 4070 because the former has 16 GB whereas the latter has only 12 GB, and this is an improvement even though the 4070 outperforms the 4060 on most tests by roughly 40%?

    Hi @VirtualVirgin. Yes. Even though the 4070 has more CUDA cores than the 4060 Ti, it has only 12 gigs of onboard memory, while there are versions of the 4060 Ti with 16 gigs.

    VSL utilizes A LOT of convolutions for each instance of its software, which require quite a decent amount of memory, so having an extra four gigs of memory will be way more beneficial than the extra 1500 CUDA cores if we compare the two GPUs side by side.

    The latest-generation GPUs can practically handle any audio session, so the real limitation you will hit first isn't processing power (it will always be enough) but memory.
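    For a rough sense of why memory fills up first, here is a back-of-the-envelope sketch in Python. The one-IR-per-source-and-output-channel model and the example numbers are my own illustration, not VSL's actual implementation, so treat the result as a ballpark only.

        # Illustrative only: memory needed just to hold convolution impulse
        # responses (IRs), assuming float32 samples and one IR per
        # (instrument, output channel) pair.

        def ir_memory_gb(instruments, output_channels, tail_seconds,
                         sample_rate=48_000, bytes_per_sample=4):
            samples_per_ir = tail_seconds * sample_rate
            total = instruments * output_channels * samples_per_ir * bytes_per_sample
            return total / 1024**3

        # "A few hundred instruments, multichannel": 300 sources into a
        # 12-channel bed with 4-second tails is already ~2.6 GB for the IRs
        # alone, before working buffers, FFT scratch space, or the display
        # framebuffer are counted.
        print(round(ir_memory_gb(300, 12, 4), 2), "GB")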


  • last edited

    Thanks. Good to know how they stack up, as one would otherwise assume that GPU Audio performance tracks the general benchmarks of the product line (RTX 4060 < 4060 Ti < 4070 < 4070 Ti < 4080 < 4090).

    Another performance question:

    As a hypothetical,

    Would you find a 4060 Ti with 16GB suitable for running 128 instrument channels of MIR3D in 3rd Order Ambisonics (let's say with a modest 4-second tail decay)?

    Also, given that you are here and have recently announced a partnership with Audio Modeling:

    How much would the CUDA core count affect the performance of those instruments?

    I'm assuming memory would have little effect on those, and calculation bandwidth is the key.


  • last edited

    @Sasha-T Regarding the 4060 Ti 16GB, is there a preference regarding manufacturer (e.g. MSI, Asus)?


  • @VirtualVirgin said:

    Would you find a 4060 Ti with 16GB suitable for running 128 instrument channels of MIR3D in 3rd Order Ambisonics (let's say with a modest 4-second tail decay)?

    How much would the CUDA core count affect the performance of those instruments?


    Absolutely. 128 instruments, even in Dolby Atmos 7.1.4, is almost exactly the benchmark configuration we run for performance profiling, so I can definitely confirm it should be sufficient.

    This is a VSL forum, and I don't want to disclose the AM numbers yet. All I can say is that people will love the benefits and will buy this. And one of the reasons for partnering with VSL, AM, or any other company is really not just "enabling software on the GPUs" - it's way beyond that. Integrating the API creates something new, something 'unseen' before.
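    For scale, here is the channel-count arithmetic behind those configurations (standard Ambisonics/Atmos channel math, plus my own assumption of one convolution per source-channel pair; these are not GPU Audio's published profiling numbers).

        # An N-th order Ambisonics bus has (N + 1)^2 channels; Dolby Atmos
        # 7.1.4 is a 12-channel bed. With one convolution per (source, bus
        # channel) pair, this gives the number of parallel convolution streams.

        def ambisonics_channels(order):
            return (order + 1) ** 2

        sources = 128
        print(sources * ambisonics_channels(3))  # 3rd-order Ambisonics: 16 ch -> 2048 streams
        print(sources * 12)                      # 7.1.4: 12 ch -> 1536 streams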


  • last edited

    @MMKA said:

    @Sasha-T Regarding the 4060 Ti 16GB, is there a preference regarding manufacturer (e.g. MSI, Asus)?

    Not really. Lower power draw, fewer cooling fans (i.e. 2 vs. 3), less noise, and lower temperatures usually say a lot about a particular GPU model (and vendor), so I'd prioritize the numbers over the names.


  • Thanks,

    With such a setup (7.1.4) what buffer size are you using and what RTL are you getting?

    @Sasha-T said:

    Absolutely. 128 instruments, even in Dolby Atmos 7.1.4, is almost exactly the benchmark configuration we run for performance profiling, so I can definitely confirm it should be sufficient.


  • last edited

    What effect does using the same graphics card to run, say, 4 monitors (one of them 4K) and Powerhouse have on stability and performance when only running DAW software and MIR 3D Pro on Windows 11?


  • @bcslaam said:

    What effect does using the same graphics card to run, say, 4 monitors (one of them 4K) and Powerhouse have on stability and performance when only running DAW software and MIR 3D Pro on Windows 11?

    Hi @bcslaam, it depends on the system. macOS, in my experience, handles single-GPU Apple Silicon devices perfectly, and you can expect it to run really smoothly. Windows has a few issues, but it should work fine if you have a decent GPU.


  • last edited

    I need to reiterate my question here, as RTL is a very important factor in my setup (and indeed for many composers using VSTis heavily). What interface and buffer size are the GPU Audio team using to test their products, and what kind of RTL is being reported for large VSTi templates delivering high channel counts (such as 7.1.4)?

    @VirtualVirgin said:

    With such a setup (7.1.4) what buffer size are you using and what RTL are you getting?



  • @VirtualVirgin said:

    I need to reiterate my question here, as RTL is a very important factor in my setup (and indeed for many composers using VSTis heavily). What interface and buffer size are the GPU Audio team using to test their products, and what kind of RTL is being reported for large VSTi templates delivering high channel counts (such as 7.1.4)?





    Hi @VirtualVirgin,

    We support sub-1 ms buffers, and we have tested Core Audio and some ASIO-compatible devices.

    What's more important is a full understanding of how latency actually works. All plugins that run inside a DAW (virtual/software, not hardware-based) only ever see the latency set in the audio settings - that's it.

    So if you have, let's say, a Lynx device running at 32 samples at 96 kHz and you run any VST3 plugin, you have 0.33 ms to process that buffer on the plugin side. Everything outside the plugin is excluded and depends entirely on the other parts of your setup, including OS, drivers, hardware characteristics, and mode.

    Plugin developers work only with the latency that is set in the DAW, and it practically doesn't matter which device you use to test it: even an ASIO4ALL device is typically handled the same way by the DAW, so if it's 96 samples at 96 kHz, that's just 1 ms to process it all, not more, otherwise you will hear a dropout. All the other types of latency simply come on top of the DAW <> plugin latency and do not contribute to dropouts.
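    To make the buffer-time arithmetic above concrete, here is a small sketch. It only computes the plugin-side budget (buffer size divided by sample rate); the converter, driver, and OS latencies that make up the rest of the round trip are interface-specific and deliberately left out.

        # Time budget a plugin has to process one host buffer, per the
        # explanation above: buffer_size / sample_rate. RTL adds input and
        # output buffers plus converter/driver overhead on top of this.

        def buffer_budget_ms(buffer_samples, sample_rate_hz):
            return 1000.0 * buffer_samples / sample_rate_hz

        for buffer in (32, 64, 96, 128):
            print(buffer, "samples @ 96 kHz ->",
                  round(buffer_budget_ms(buffer, 96_000), 2), "ms")
        # 32 -> 0.33 ms and 96 -> 1.0 ms, matching the figures quoted above.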



  • Yes, but I'm asking if you've done stress testing here to run MIR 3D GPU Audio with many output channels.

    Here is an alternate question you could elaborate on:

    How many channels of MIR 3D can you run at 32, 64, 128 buffer sizes configured to a 7.1.4 output (or other high channel format) in Nuendo or Reaper when GPU audio is enabled? What interface(s) are you using to test GPU Audio?

    What are the specs of the testing rigs that you use when you are pushing the performance of GPU Audio?

    @Sasha-T said:

    We support sub-1 ms buffers, and we have tested Core Audio and some ASIO-compatible devices.

    Plugin developers work only with the latency that is set in the DAW: if it's 96 samples at 96 kHz, that's just 1 ms to process it all, not more, otherwise you will hear a dropout.



  • @VirtualVirgin said:

    How many channels of MIR 3D can you run at 32, 64, 128 buffer sizes configured to a 7.1.4 output (or other high channel format) in Nuendo or Reaper when GPU audio is enabled? What interface(s) are you using to test GPU Audio?

    What are the specs of the testing rigs that you use when you are pushing the performance of GPU Audio?



    It could be hundreds of MIR instruments for the latencies and the 12-channel configuration you mentioned, depending on the setup: hardware (including the CPU), software (including the way plugins are bridged / called by the DAW), and the DAW itself and its configuration (i.e. thread count, thread priorities, etc.).

    The best performers so far are max-spec Apple Silicon systems with Logic Pro, and some Windows / Reaper / Nvidia setups, at least in my personal experience.


    Hi, I just checked the drivers for an RTX 3060 12GB on the official Nvidia site. There are two rather new versions: a GeForce Game Ready Driver (WHQL) and an NVIDIA Studio Driver (WHQL). Which one do you recommend? They say the Studio driver is for "content" work, which seems the most applicable. What are your thoughts?

    Regards:

    Willem


    The more the better, although I have to say that my GTX 1080 handles my wind ensemble of 22 instruments rather well. It only needs a third of my graphics card's computing power. Of course that isn't many instruments, but it's impressive nonetheless.


  • @Mary said:

    Hi, I just checked the drivers for an RTX 3060 12GB on the official Nvidia site. There are two rather new versions: a GeForce Game Ready Driver (WHQL) and an NVIDIA Studio Driver (WHQL). Which one do you recommend?

    Hi @Mary, normally, there is no difference at all for our use case.