Using a neuroheadset to work with VST.NET

Topics: 3rd Party Libs, Host Development, Midi, My, Newbie, Other, Plugin Development
Nov 28, 2013 at 11:13 AM
Hey people

As some of you already know, we are working on a project to create a system that uses a neuroheadset to read data from the brain and process it into a sound output, as if you were composing music with your mind.

Although we have made quite some progress recently (thanks a lot to Jacobi and Yuri), we are still facing some issues. The one at hand: the headset is not a MIDI device, and we don't plan to make it one, but it would be much easier if we could "cheat" some of the samples (particularly MidiNoteMapper) so that the constant stream of processed information coming from the headset is understood and translated into notes. How can we do that? Can we simulate a MidiEvent, in a way?

Thank you!
Nov 28, 2013 at 11:16 AM
Yes, that would seem a good way to approach it. That way you can use any synth down the line to create the actual sound.

The main question is: do you have an algorithm that will generate these Midi notes the way you intend?

If you do, refer to the MidiNoteMapper plugin sample to see how you can create and output Midi events from a VST.NET plugin...
Nov 28, 2013 at 11:46 AM
Edited Nov 28, 2013 at 11:52 AM
If I understand correctly, MIDI notes have defined numbers which represent them. So if we make an algorithm, it will have to produce these numbers. I think that would be easy to do, given what we have already done.
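For reference, those numbers are fixed by the MIDI standard: 0-127, one per semitone, with middle C at 60 and A4 (concert pitch, 440 Hz) at 69. This sketch shows the standard relation between a note number and its pitch (the class and method names are just illustrative):

```csharp
using System;

static class MidiNotes
{
    // Standard MIDI note numbers: 0-127, one per semitone.
    // Middle C = 60; A4 (concert pitch, 440 Hz) = 69.
    public static double NoteToFrequency(int note)
    {
        // Equal temperament: each semitone multiplies the pitch by 2^(1/12).
        return 440.0 * Math.Pow(2.0, (note - 69) / 12.0);
    }
}
```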

So, let's say we make that algorithm. Where are we supposed to place it, and where do we use it? Should we create an event handler for the Midi input events? And what if we have no input events at all, since we are going to use threads for a constant flow of data?
Nov 28, 2013 at 12:34 PM
To play a note you create a Midi NoteOn event. To stop playing that note you create a Midi NoteOff event. The time at which you output these events corresponds to the time the note will actually start and stop playing. So I would guess that as the stream of neural data comes in, you just keep creating Midi events based on that data.

The act of outputting Midi events (not just notes, but any Midi event) from a VST.NET plugin is very simple. You create the corresponding MidiEvent instances, add them to a collection, and pass that collection on to the host for the current processing cycle. The MidiNoteMapper does exactly this.
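To make that concrete: a NoteOn message consists of the status byte 0x90 (OR'ed with the channel), the note number, and a velocity; NoteOff uses status 0x80. Here is a minimal sketch of just these raw message bytes (class and method names are illustrative; in VST.NET you would wrap bytes like these in a Midi event object and hand it to the host, as the MidiNoteMapper sample shows):

```csharp
static class MidiMessages
{
    // Raw 3-byte MIDI channel messages. Channel is 0-15, note and velocity 0-127.
    public static byte[] NoteOn(int channel, int note, int velocity)
    {
        return new byte[] { (byte)(0x90 | channel), (byte)note, (byte)velocity };
    }

    // A NoteOff (velocity 0) stops the note started by the matching NoteOn.
    public static byte[] NoteOff(int channel, int note)
    {
        return new byte[] { (byte)(0x80 | channel), (byte)note, 0 };
    }
}
```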

For more info on MIDI:
Nov 28, 2013 at 12:52 PM
Okay, thanks a lot. We will now try to implement those things and will let you know how it goes :)
Nov 29, 2013 at 4:50 PM
Down the line you'll need to translate values from one range to another. These might be helpful:
        public static int LinearConversion(int oldValue, int oldMin, int oldMax, int newMin, int newMax)
        {
            // Maps oldValue from [oldMin, oldMax] onto [newMin, newMax].
            // Note: integer division truncates; oldMax must not equal oldMin.
            int oldRange = oldMax - oldMin;
            int newRange = newMax - newMin;

            return (((oldValue - oldMin) * newRange) / oldRange) + newMin;
        }

        public static float LinearConversion(float oldValue, float oldMin, float oldMax, float newMin, float newMax)
        {
            // Same mapping without truncation, for continuous values.
            float oldRange = oldMax - oldMin;
            float newRange = newMax - newMin;

            return (((oldValue - oldMin) * newRange) / oldRange) + newMin;
        }
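For example, mapping a (hypothetical) 10-bit headset reading onto the MIDI note range 0-127, clamping first so an out-of-range input can't produce an invalid note number:

```csharp
using System;

static class HeadsetMapping
{
    static int LinearConversion(int oldValue, int oldMin, int oldMax, int newMin, int newMax)
    {
        return (((oldValue - oldMin) * (newMax - newMin)) / (oldMax - oldMin)) + newMin;
    }

    // Hypothetical example: a 10-bit reading (0-1023) mapped to a MIDI note (0-127).
    public static int ReadingToNote(int reading)
    {
        // Clamp first: values outside the old range would map outside 0-127.
        reading = Math.Min(Math.Max(reading, 0), 1023);
        return LinearConversion(reading, 0, 1023, 0, 127);
    }
}
```

A mid-range reading of 512 gives note 63, roughly the middle of the note range (integer division rounds down).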