A VST that just outputs sample data from an internet stream - where to start?

Topics: Audio, Editor UI, Getting Started, Plugin Development, Plugin Parameters, Plugin Programs, VST.NET Core, VST.NET Framework, VST.NET Interop
Apr 14, 2011 at 9:26 PM


I'd like to create a very simple VST that connects to a live internet audio stream (for example a stream from radioreference.com) and outputs that sample data.
This way I can use the VST in a track, and every time I play the track the audio data is live, and I can chain other effects onto it.

What would be the best starting point?
I know how to get the sample data from the live stream, so what I need to know is how to actually create the VST that outputs it.
Is there a simple example of a VST that outputs sample data, where I could just plug in the code to fetch the stream data and use that as output?



Apr 15, 2011 at 7:17 AM

Download the VS2008/2010 project templates and choose a new Audio project. That gives you a working delay effect (similar to the Delay sample).
Locate the AudioProcessor and adjust the number of audio inputs (note: it may well be necessary to fake some inputs in order to be called by the host at all).
Fetch your stream data in the Process method of the AudioProcessor. Make sure you fill the output buffers with values in the [-1.0, 1.0] range. There is a separate buffer for each channel. You may have to convert the audio samples from the live stream to fit this format.
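That conversion step is framework-agnostic, so here is a minimal sketch of it (in Python rather than C#, with hypothetical names - this is not VST.NET code): de-interleaving 16-bit PCM, a common stream format, into one float buffer per channel scaled to [-1.0, 1.0].

```python
import struct

def pcm16_to_float_channels(raw: bytes, num_channels: int = 2):
    """Convert interleaved signed 16-bit PCM (as delivered by many
    internet streams) into one list of floats per channel, scaled
    to the [-1.0, 1.0] range a VST output buffer expects."""
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    # De-interleave: channel c gets samples c, c + num_channels, ...
    return [
        [s / 32768.0 for s in samples[c::num_channels]]
        for c in range(num_channels)
    ]

# Example: one stereo frame, left = max positive, right = most negative
left, right = pcm16_to_float_channels(struct.pack("<2h", 32767, -32768))
```

The same arithmetic (divide by 32768) carries over directly to the C# Process method; only the de-interleaving pattern depends on the stream format.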

Build your project and deploy (copy) the [MyProject].dll, [MyProject].net.vstdll and the Jacobi.Vst.Core and the Jacobi.Vst.Framework assemblies to one folder (one that is scanned by the host) and start the host. You should be able to load the [MyProject].dll (choose this dll when asked to locate the plugin dll) into the host.

That's about it. You should now have a VST plugin that injects your live stream feed into the host.

Hope it helps.

Apr 18, 2011 at 2:55 PM

Ok, cheers.
So I figure, since I'm only generating audio, I'll set the input count to 0 and ignore the inChannels in the Process method.
In the Process method of an effect processor like the delay, you write as many samples to the output as there are in the input, but since I have no inputs - how do I know how many samples I can write?



Apr 18, 2011 at 5:00 PM

The output buffers all have a SampleCount property, which is the same for all channels (buffers). A Sample is one float value.
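So a pure generator just fills SampleCount floats per channel on every Process call. A sketch of that loop (illustrative Python, not VST.NET code; the class and method names are hypothetical), using a sine wave whose phase carries over between calls:

```python
import math

class SineSource:
    """Illustrative only: a generator plugin fills the output buffers
    one Process call at a time. The host dictates the buffer length
    (SampleCount); the plugin writes exactly that many floats per
    channel and keeps its phase between calls."""
    def __init__(self, freq_hz=440.0, sample_rate=44100.0):
        self.phase = 0.0
        self.step = 2.0 * math.pi * freq_hz / sample_rate

    def process(self, out_channels, sample_count):
        for i in range(sample_count):
            value = math.sin(self.phase)
            self.phase += self.step
            for channel in out_channels:  # same signal on every channel
                channel[i] = value

# Host-side simulation: two output channels, SampleCount = 512
outputs = [[0.0] * 512, [0.0] * 512]
SineSource().process(outputs, 512)
```

For the streaming plugin, the sine computation would simply be replaced by pulling the next decoded sample from the stream's buffer.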


Apr 18, 2011 at 10:05 PM
Edited Apr 18, 2011 at 10:08 PM

So just to be sure: the output buffers' SampleCount property specifies how many samples are expected?


By the way, I think it would be a good idea to have a sample that implements a synth that just outputs a sine wave at a MIDI-keyed frequency, as a base for synth development.
(Or does something like this exist already and have I missed it?)

Apr 19, 2011 at 5:21 AM

Yes, and the SampleCount property is the same on all buffers (because it came from a single native value). I just added the SampleCount property to the buffer class for convenience - so you always have access to the correct size of the buffer from any buffer reference you might route through your program. And don't worry: if you try to write beyond the buffer's end, you'll get an ArgumentOutOfRangeException ;-)

Yes I think that would also be a good idea ;-) http://vstnet.codeplex.com/workitem/6165 You can vote on it, if you'd like. I just haven't had the time to work on it yet. I've been busy with several other home projects as well...

Hope it helps,

Apr 29, 2011 at 7:18 AM
Edited Apr 29, 2011 at 7:53 AM

Ok, I've got it up and running playing internet streams... Almost.

Running it in vsthost it works perfectly.
Then I tried to use it in Ableton, and it doesn't work.

It's seen as a VST effect (VSTfx), not as a VST instrument (VSTi), which makes sense since it was basically built on top of the Delay sample.
So what exactly do I need to do to switch it from VSTfx to VSTi?
Should it just implement IVstPluginMidiSource, and/or should I do / implement something else as well?

By the way, once I get my plugin to work the way it's supposed to, I'll make the sources available so other people can have a look and poke fun at it :)

Apr 29, 2011 at 5:53 PM

My suggestion would be to have a dummy midi processor in your plugin. This tricks the host into thinking you need midi to produce the sound. It would also mean you can only insert the plugin on a midi track...? You could also consider setting the number of audio inputs to 2 (and doing nothing with them). This is just because some hosts have weird notions of how plugins should 'be'...

Hope it helps,

May 11, 2011 at 9:11 PM


I've changed my mind and settled for it being an audio processor for the time being. The trick was indeed to set it to have 2 (unused) inputs.
Another question, though: I have only one parameter in my AudioProcessor, the stream index (an int).
Currently, when loading a track that uses my VST, it's always set to the default.

How do I save / restore the state of my single parameter ?

May 12, 2011 at 5:27 AM

A "normal" parameter would typically not change due to the operation of the plugin. For example, a volume parameter would typically not adjust itself.

Typically a (sequencer) host will take a snapshot of the plugin's parameter values and store them in the song. When the song is loaded into the host, those same parameter values are applied by the host to the plugin.

In your case I would suggest you implement the IVstPluginPersistence interface. There is a base class (in the Plugin namespace) that provides a typical implementation and only requires you to fill in the specifics of your plugin. Refer to the Delay sample plugin, where it is implemented.
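The idea behind that interface is just a round trip through a stream the host owns: write your parameters when the song is saved, read them back when it is loaded. A minimal sketch of the round trip (illustrative Python, hypothetical names - not the VST.NET API) for your single int parameter:

```python
import io
import struct

class StreamPlayerState:
    """Illustrative only: the shape of what a persistence interface
    asks of a plugin. The host hands the plugin a stream when saving
    the song and again when loading it; the plugin writes and reads
    its own parameters in a format it chooses."""
    def __init__(self, stream_index=0):
        self.stream_index = stream_index

    def save_to(self, stream):
        # Persist the single parameter as a little-endian int32.
        stream.write(struct.pack("<i", self.stream_index))

    def load_from(self, stream):
        (self.stream_index,) = struct.unpack("<i", stream.read(4))

# Round trip: save from one instance, restore into a fresh one,
# as a host would do across a song save/load.
saved = io.BytesIO()
StreamPlayerState(stream_index=7).save_to(saved)
saved.seek(0)
restored = StreamPlayerState()
restored.load_from(saved)
```

The format of the bytes is entirely up to the plugin, as long as reading mirrors writing; the base class in the framework handles the surrounding plumbing.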

Hope it helps,