Combine NAudio and VST.NET in a host

Sep 27, 2010 at 10:18 AM

Hi Marc,

 

thanks for your great work! I am trying to implement a .NET host for multiple VST plugins (stereo audio in and stereo out each). Starting from your Samples.Host project, I successfully got it working for one plugin. What is the most effective (managed) way to process a plugin chain? From your example with one plugin, GenerateNoiseBtn_Click() reads:

 

// ...

int inputCount = PluginContext.PluginInfo.AudioInputCount;
int outputCount = PluginContext.PluginInfo.AudioOutputCount;
int blockSize = 1024;

VstAudioBufferManager inputMgr = new VstAudioBufferManager(inputCount, blockSize);
VstAudioBufferManager outputMgr = new VstAudioBufferManager(outputCount, blockSize);

foreach (VstAudioBuffer buffer in inputMgr.ToArray())
{
    Random rnd = new Random((int)DateTime.Now.Ticks);
    for (int i = 0; i < blockSize; i++)
    {
        // generate a value between -1.0 and 1.0
        buffer[i] = (float)((rnd.NextDouble() * 2.0) - 1.0);
    }
}

PluginContext.PluginCommandStub.SetBlockSize(blockSize);
PluginContext.PluginCommandStub.SetSampleRate(44100f);

VstAudioBuffer[] inputBuffers = inputMgr.ToArray();
VstAudioBuffer[] outputBuffers = outputMgr.ToArray();

PluginContext.PluginCommandStub.MainsChanged(true);
PluginContext.PluginCommandStub.StartProcess();
PluginContext.PluginCommandStub.ProcessReplacing(inputBuffers, outputBuffers);
PluginContext.PluginCommandStub.StopProcess();
PluginContext.PluginCommandStub.MainsChanged(false);

Is it best to create one PluginContext for each plugin? And then two VstAudioBufferManagers and VstAudioBuffers for each plugin? How should I call ProcessReplacing? In a loop? Could you kindly provide some lines of code? Thank you very much!

Coordinator
Sep 27, 2010 at 10:44 AM
Edited Sep 27, 2010 at 10:53 AM

Thanx for the kind words.

The only way to communicate with a plugin in VST.NET is through a PluginContext. So, yes you have to create one for each plugin (instance) you have to deal with.

How to call the ProcessReplacing methods on each plugin depends on what type/level of routing you allow for in your host. If your host only allows users to string together plugins without branching and/or merging, then processing is fairly simple. Take the audio source (recorded or the output of a VSTi plugin) and pass it along the chain of plugins. The output of one plugin is the input for the next plugin. Note that you can call ProcessReplacing asynchronously, meaning you can call it again on a plugin as soon as it is done processing the previous buffer. This pattern is also known as "pipes and filters" (look it up if you're not familiar with it). This method of processing doesn't force you to wait for one buffer of audio to finish processing by all the plugins in the chain. Instead it continues to pump audio buffers through the chain of plugins as fast as they can process them. If there are big differences in the processing latency of the plugins you may have to queue the audio buffers between plugins. You can imagine that this gets way more complicated when you allow plugins to be configured in parallel.

Having said that, I would suggest you start with a simple single chain of plugins that is processed synchronously. So you pass the initial audio buffer to the ProcessReplacing of the first plugin, pass its output to the input of the next plugin, and so on. Be mindful of the number of audio buffers that you keep alive during audio processing (memory consumption). When you have that working you can try the asynchronous way.
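In code, such a synchronous chain could look something like this (just a sketch, not tested: it assumes each plugin has already been opened and started, that all plugins share the same channel count and block size, and that `contexts` is your list of VstPluginContext objects; the two buffer managers are created as in the noise example above):

```csharp
// Process one block of audio through a simple, unbranched plugin chain.
// Assumes MainsChanged(true) and StartProcess() were already called once
// per plugin, and that 'input' currently holds the source audio.
VstAudioBuffer[] input = inputMgr.ToArray();
VstAudioBuffer[] output = outputMgr.ToArray();

foreach (VstPluginContext ctx in contexts)
{
    ctx.PluginCommandStub.ProcessReplacing(input, output);

    // The output of this plugin becomes the input of the next one.
    VstAudioBuffer[] tmp = input;
    input = output;
    output = tmp;
}

// After the loop, 'input' references the buffers holding the
// output of the last plugin in the chain.
```

Swapping the two buffer arrays between plugins means you only ever keep two sets of buffers alive, regardless of the chain length.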

I have performed an experiment in an attempt to develop a generic 'engine' that would process audio (and midi) for any configuration of connections in the most optimal way. I tried to implement it with the new .NET 4.0 Task objects but concluded that the default Task scheduling was not suitable for this problem. I haven't had time to pursue this further. You can find the code here: http://pfe.codeplex.com/

As you may have begun to understand, this is no easy walk in the park ;-)

Hope it helps.

Sep 27, 2010 at 11:03 AM

Wow, that was a FAST answer, thank you! I'll try the synchronous way first, as the chain has no branches and is fairly small (<6). My idea is to set up a list of PluginContexts in chain order, plus two global VstAudioBufferManagers and their VstAudioBuffers. A "run" method iterates through the context list, calling ProcessReplacing for each plugin with the appropriate buffer order; the output of one plugin is the input for the next. The final output goes through NAudio. I'll post my experiences.

Sep 30, 2010 at 4:42 PM
Edited Sep 30, 2010 at 4:45 PM

Hi Marc,

VST.NET runs perfectly :-) However, there is one question left. Following the NAudio source code, it delivers stereo samples interleaved in one buffer: LRLRLR... In this scenario I chose 32-bit samples (float).

VstAudioBuffer reserves samples for the channels separately, right? Meaning:
VstAudioBuffer[0] = LLLLLL...
VstAudioBuffer[1] = RRRRRR...

I tried to extract channel data like this, but it didn't work out: the sound is choppy, or an unintended upsampling occurs, so obviously I got something wrong :D If instead I completely ignore the channels, it just works fine. But then all channel data is stored in VstAudioBuffer[0], which is not intended, right?

For testing, I used your delay plugin, which is stored in plugins[0]. It seems as if the plugin is written to process a mono stream only. Nevertheless, I delivered the multiplexed LRLR... stream and it worked. Any hint on how to do it right is greatly appreciated!

The program gets audio data from a .wav file via NAudio, processes it with the delay plugin and outputs it again to the soundcard with NAudio. The classes are attached. By the way: thanks for the discussions here, which helped a lot in writing the code!

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
 
using NAudio.Wave;
using NAudio;
using Jacobi.Vst.Core;
using Jacobi.Vst.Core.Host;
using Jacobi.Vst.Framework;
using Jacobi.Vst.Interop.Host;
 
namespace AudioIO
{
    class AudioOutput : IDisposable
    {
        // NAudio Player
        private IWavePlayer playbackDevice = null;
        private VSTStream vstStream = null;
        private List<IVstPluginCommandStub> plugins;
 
        public AudioOutput(List<IVstPluginCommandStub> plugins)
        {
            this.plugins = plugins;
            Init();
            Play();
        }
 
        public void Init()
        {
            // 4410 samples == 100 milliseconds
            vstStream = new VSTStream(44100, 2, 4410, this.plugins);
            playbackDevice = new WaveOut(WaveCallbackInfo.FunctionCallback());
            playbackDevice.Init(vstStream);
        }
 
        public void Play()
        {
            if (playbackDevice != null && playbackDevice.PlaybackState != PlaybackState.Playing)
                playbackDevice.Play();
        }
 
        public void Stop()
        {
            if (playbackDevice != null && playbackDevice.PlaybackState != PlaybackState.Stopped)
                playbackDevice.Stop();
        }
 
        public void Dispose()
        {
            if (playbackDevice != null)
            {
                playbackDevice.Pause();
                playbackDevice.Stop();
                playbackDevice.Dispose();
                playbackDevice = null;
            }
        }
    }

    class VSTStream : WaveProvider32
    {
        public long Length { get { throw new System.NotSupportedException(); } }
        public long Position { get { throw new System.NotSupportedException(); } set { throw new System.NotSupportedException(); } }
 
        public List<IVstPluginCommandStub> plugins;
        VstAudioBufferManager vstBufManIn, vstBufManOut;
        
        private VstAudioBuffer[] vstBufIn = null;
        private VstAudioBuffer[] vstBufOut = null;
 
        private byte[] naudioBuf;
        private int sampleRate, channels, blockSize;
        private WaveChannel32 wavStream;
 
        public VSTStream(int sampleRate, int channels, int blockSize, List<IVstPluginCommandStub> plugins)
            : base(sampleRate, channels)
        {
            this.plugins = plugins;
            this.sampleRate = sampleRate;
            this.channels = channels;
            this.blockSize = blockSize;
 
            plugins[0].SetBlockSize(blockSize);
            plugins[0].SetSampleRate((float)sampleRate);

            // The " * channels" is because left and right channel is stored in one buffer
            // This should be wrong...
            vstBufManIn = new VstAudioBufferManager(channels, blockSize * channels);
            vstBufManOut = new VstAudioBufferManager(channels, blockSize * channels);
 
            vstBufIn = vstBufManIn.ToArray();
            vstBufOut = vstBufManOut.ToArray();
 
            // 4 bytes per sample (32 bit)
            naudioBuf = new byte[blockSize * channels * 4];
 
            wavStream = new WaveChannel32(new WaveFileReader("test.wav"));
            wavStream.Volume = 1f;
        }
 
        // 'buffer' is passed by reference!!!
        public override int Read(float[] buffer, int offset, int sampleCount)
        {
            int bytesRead = wavStream.Read(naudioBuf, offset, sampleCount * 4);
 
            unsafe
            {
                fixed (byte* byteBuf = &naudioBuf[0])
                {
                    float* floatBuf = (float*)byteBuf;
                   
                    // splitting the stream requires different indexing than here
                    for (int i = 0; i < sampleCount; i++)
                    {
                        vstBufIn[0][i] = *(floatBuf + i);

                        //vstBufIn[0][i] = *(floatBuf + 2 * i);
                        //vstBufIn[1][i] = *(floatBuf + 2 * i + 1);
                    }
                }
            }
 
            plugins[0].MainsChanged(true);
            plugins[0].StartProcess();
            //vstBufOut = vstBufIn;
            plugins[0].ProcessReplacing(vstBufIn, vstBufOut);
            plugins[0].StopProcess();
            plugins[0].MainsChanged(false);
 
            
            unsafe
            {
                float* tmpBufL = ((IDirectBufferAccess32)vstBufOut[0]).Buffer;
                //float* tmpBufR = ((IDirectBufferAccess32)vstBufOut[1]).Buffer;

                // multiplexing the stream requires different indexing than here
                for (int i = 0; i < (sampleCount); i++)
                {
                    //if (i % 2 == 1)
                    //{
                        buffer[i] = *(tmpBufL + i);
                    //}
                    //else 
                    //{
                    //    buffer[i] = *(tmpBufR + i);
                    //}
                }
            }
            
            return sampleCount;
        }
    }
}

 

And calling the class with:

try
{
    String pluginPath = "Jacobi.Vst.Samples.Delay.dll";
    hostCmdStub = new HostCommandStub();
 
    VstPluginContext ctx = VstPluginContext.Create(pluginPath, hostCmdStub);
 
    // add custom data to the context
    ctx.Set("PluginPath", pluginPath);
    ctx.Set("HostCmdStub", hostCmdStub);
 
    // actually open the plugin itself
    ctx.PluginCommandStub.Open();
 
    audioOut = new AudioIO.AudioOutput(
        new List<IVstPluginCommandStub>() {ctx.PluginCommandStub});
}
catch (Exception ex)
{
    MessageBox.Show(ex.ToString());
}

 

Coordinator
Sep 30, 2010 at 5:19 PM

Other people have wrestled with this problem too. Take a look at the following discussions and let me know if you can't get it working.

http://vstnet.codeplex.com/Thread/View.aspx?ThreadId=216682

http://vstnet.codeplex.com/Thread/View.aspx?ThreadId=205889

http://vstnet.codeplex.com/Thread/View.aspx?ThreadId=204934

http://vstnet.codeplex.com/Thread/View.aspx?ThreadId=79792

or search the discussion list on "NAudio".

You could also contact Mark (Author of NAudio) and ask him for help in 'unlacing' the interlaced buffers.

Hope it helps.
Marc.

Oct 2, 2010 at 12:37 AM
Edited Oct 2, 2010 at 12:40 AM

Hi Daniel!

// The " * channels" is because left and right channel is stored in one buffer
// This should be wrong...
vstBufManIn = new VstAudioBufferManager(channels, blockSize * channels);
vstBufManOut = new VstAudioBufferManager(channels, blockSize * channels);
 
Yes indeed, I think you shouldn't multiply the block size by the number of channels, because VstBuffer has an array dimension for each channel. Try:

vstBufManIn = new VstAudioBufferManager(channels, blockSize);
vstBufManOut = new VstAudioBufferManager(channels, blockSize);

// A 2-channel NAudio buffer should be twice the size of the VstBuffer
int j = 0;

for (int i = 0; i < VstSampleCount; i++)
{
    vstBufIn[0][i] = NAudioBuffer[j];
    j++;
    vstBufIn[1][i] = NAudioBuffer[j];
    j++;
}

Oct 5, 2010 at 10:22 AM
Edited Oct 5, 2010 at 10:33 AM
YuryK wrote: 
int j = 0; 

for (int i = 0; i < VstSampleCount; i++)
{
    vstBufIn[0][i] = NAudioBuffer[j];
    j++;
    vstBufIn[1][i] = NAudioBuffer[j];
    j++;
}

 

The error in reasoning is now solved, thanks YuryK and Marc ;-)

This is the working Read() for stereo = 2 channels:

 

// 'buffer' is passed by reference!!!
public override int Read(float[] buffer, int offset, int sampleCount)
{
    int bytesRead = wavStream.Read(naudioBuf, offset, sampleCount * 4);

    unsafe
    {
        fixed (byte* byteBuf = &naudioBuf[0])
        {
            float* floatBuf = (float*)byteBuf;
            int j = 0;
            for (int i = 0; i < sampleCount / channels; i++)
            {
                vstBufIn[0][i] = *(floatBuf + j);
                j++;
                vstBufIn[1][i] = *(floatBuf + j);
                j++;
            }
        }
    }

    plugins[0].MainsChanged(true);
    plugins[0].StartProcess();
    plugins[0].ProcessReplacing(vstBufIn, vstBufOut);
    plugins[0].StopProcess();
    plugins[0].MainsChanged(false);

    unsafe
    {
        float* tmpBufL = ((IDirectBufferAccess32)vstBufOut[0]).Buffer;
        float* tmpBufR = ((IDirectBufferAccess32)vstBufOut[1]).Buffer;
        int j = 0;
        for (int i = 0; i < (sampleCount / channels); i++)
        {
            buffer[j] = *(tmpBufL + i);
            j++;
            buffer[j] = *(tmpBufR + i);
            j++;
        }
    }
    
    return sampleCount;
}

 

Oct 12, 2010 at 7:43 AM
Edited Oct 12, 2010 at 7:47 AM

I stumbled upon a strange phenomenon. When using the sample delay plugin, the following code works in the Read method. It is called for every NAudio buffer (100 ms per buffer):

plugins[0].MainsChanged(true);
plugins[0].StartProcess();
plugins[0].ProcessReplacing(vstBufIn, vstBufOut);
plugins[0].StopProcess();
plugins[0].MainsChanged(false);

But when trying other plugins, for example Freeverb or Wow 'n' Flutter, the audio output stutters. I wrote a .wav file from the output and it turned out that every 100 ms there is a small bit of silence in the output. When I moved the MainsChanged and Start/StopProcess calls out of the Read method, it worked! So they are called just once, when playing starts and stops, not every 100 ms.

Additionally, the NAudio Play() and Stop() methods have to be called BEFORE the MainsChanged and Start/StopProcess calls, otherwise there is a deadlock in the audio output. Just a hint for the readers here...

Coordinator
Oct 12, 2010 at 8:53 AM

The calls to MainsChanged and Start/StopProcess should be made only once, not for every cycle in the audio processing.

So it should look something like:

[plugin.Open()]

plugin.MainsChanged(true) // turn on 'power' on plugin.
plugin.StartProcess() // let the plugin know the audio engine has started

while(audioEngineIsRunning)
{
    plugin.ProcessReplacing(inputBuffers, outputBuffers)  // let the plugin process the audio stream
}

plugin.StopProcess()
plugin.MainsChanged(false)

[plugin.Close()]

Hope it helps,
Marc

Oct 12, 2010 at 8:55 AM

Ok, cool! Exactly what I experienced ;-)

Mar 30, 2014 at 11:09 PM
I've been playing with this code and it actually works, and works well!

I have a question: I was trying to do an effects bypass with plugins[0].SetBypass(true); but it is not working.

Am I doing this right?

Thanks for any help
-Jeff
Coordinator
Mar 31, 2014 at 6:25 AM
Hi Jeff,

If a plugin supports its own bypass, it will respond to a 'bypass' CanDo (VstPluginCanDo.Bypass). If it does not respond to that can-do string, it does not support bypass and the host should implement it.

The VST.NET Framework implements the CanDo method by checking the interfaces for the various features (IVstPluginBypass in this case).
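A host-side check for this could look roughly as follows (a sketch only, not from the original thread: the CanDo string and SetBypass call are as discussed above, while the `bypassed` flag, the exact CanDo return comparison, and the plain buffer copy are illustrative assumptions):

```csharp
// Sketch: decide between plugin-side and host-side bypass for one block.
// Assumes 'plugin' is the IVstPluginCommandStub and 'bypassed' is a host flag.
bool pluginHasBypass = plugin.CanDo("bypass") == VstCanDoResult.Yes;

if (bypassed && !pluginHasBypass)
{
    // Host-side bypass: copy the input buffers straight to the output
    // buffers and skip ProcessReplacing entirely.
    for (int c = 0; c < vstBufIn.Length; c++)
        for (int i = 0; i < blockSize; i++)
            vstBufOut[c][i] = vstBufIn[c][i];
}
else
{
    if (bypassed)
        plugin.SetBypass(true); // plugin handles the bypass itself

    plugin.ProcessReplacing(vstBufIn, vstBufOut);
}
```

Note that even a plugin-side bypass may still want ProcessReplacing to be called, so delay tails and similar state can flush out.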

Hope it helps.
Marc

PS: I would prefer if you start new discussions for new topics. Thanx.