
syncing playback with application timer

Topics: Host Processing, Newbie
Feb 1, 2015 at 8:32 PM
my application initially used a different setup, but i changed it by following more or less the miditest example in audiovsttoolbox.

the software has a step sequencer. a timer checks on every tick (1/4 of a quarter note, i.e. a 16th note) whether it should play something or stop some notes. in general this works, but the notes are played at the wrong time, and the higher the latency the worse it gets.

the position property of the naudio WaveMixerStream32 i'm using to mix the wavestreams for all instruments shows the following values on each tick of my own timer:

i guess that playing the notes exactly when the timer fires is too naive an approach and i somehow have to pre-calculate the notes for the next few ticks.

is this correct? do i then just set the deltaframe value of each midi note to control when it should be played in the future? and how do i find this value? the position property of the mixer stream seems to be the only thing related to time.

my timer collects the notes and passes them to processevents. naudio takes care of the buffer and processreplacing. would be great if somebody could describe the big picture of how all this is supposed to come together. thank you in advance.
Feb 2, 2015 at 12:55 AM
Edited Feb 2, 2015 at 12:56 AM
The approach seems right to me, except for the NAudio thread taking care of processreplacing. Ideally you'd want processevents and processreplacing running sequentially in a VstHost thread, and NAudio running in a separate thread. Are you using Asio? What kind of timer are you using? I've switched to PortAudio due to the low performance of NAudio. I can't help much with deltaframes calculations, as I'm playing the notes live; I don't perceive any serious latency problem at 512 buffer size when using 2 vsts with Asio drivers.
Feb 2, 2015 at 11:24 AM
thanks. what i meant with "naudio taking care of the processreplacing" is that processreplacing is called whenever naudio requests new audio data. it does so less often the bigger the audio buffer is. hence the problem almost vanishes when using a very low latency of 10, but then of course the audio quality gets pretty bad.

the two processes are completely detached this way. naudio does its thing to play audio and calls processreplacing once in a while when it needs fresh audio data.
my timer calls processevents from time to time when it has new notes to play or old notes to stop.

my timer is just a simple system.timer, accurate enough for the purpose of firing an event every 16th note / semiquaver. the issue is the same with asio.

looked into portaudio and it sounds like i should give it a try. at least i could find more information there related to my problem of syncing the audio to a windows timer.
Feb 2, 2015 at 3:20 PM
ProcessEvents isn't meant to be called just from time to time. I suggest calling processevents every time just before calling processreplacing; they go in pairs.

Push your events to a buffer and pop them when it's the right time to call processevents. Typically, in a playback host, you'd check whether an event from the queue fits in the next latency block, adjust the delta frame accordingly, and pop it for processevents.
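The queue-and-pop idea above can be sketched roughly like this (illustrative Python pseudologic, not Vst.Net code; the function and tuple layout are my own invention):

```python
# Hypothetical sketch of a playback host's event queue. Each pending MIDI
# event is tagged with an absolute sample position; just before the audio
# callback, the host pops only the events that fall inside the upcoming
# block and converts their position into a block-relative delta frame.

def pop_events_for_block(queue, block_start, block_size):
    """Return (events_for_this_block, remaining_queue).

    Each queue entry is a (sample_position, payload) tuple; the delta
    frame is the event's offset from the start of the upcoming block.
    """
    due, remaining = [], []
    for sample_pos, payload in queue:
        if sample_pos < block_start + block_size:
            # Clamp late events to delta frame 0 so they still play.
            delta_frame = max(0, sample_pos - block_start)
            due.append((delta_frame, payload))
        else:
            remaining.append((sample_pos, payload))
    return due, remaining
```

So with a 512-sample block starting at sample 0, an event at sample 100 is popped with delta frame 100, while an event at sample 600 stays queued for a later block.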

In my live host I just push an event to the queue when it's time to send it. I set my requirements at 512 buffer size and 44kHz, so in a worst-case scenario the event would be about 512 samples late. That delay is not noticeable, at least to me; we're talking about 12 ms here, well below 1/10 of a second.

However, I would advise against using a non-multimedia timer like system.timer if you're looking for accuracy. Another factor is the Vst you are using, because it can also introduce latency. One thing I haven't understood is what you mean by "played at the wrong time"; it's a broad description. The problem could be drift, latency, thread synchronization... notes play at the wrong time in all these cases, but depending on the cause they play consistently late, sometimes late, or at random times.
Feb 3, 2015 at 4:14 PM
thanks again. so i changed the approach to put the noteon and noteoff events into a list and processevents is called every time before processreplacing is called, even if there are no events.

that leaves me with two issues: the application timer, even when using a multimedia timer, sometimes deviates by up to 30 ms, which could explain something, because at 124 bpm the timer is supposed to tick and check for new notes every 120 ms. not an issue to increase the buffer size here, and instead of having a tick on every 16th note i could tick every quarter note and queue the next four 16th notes. that's necessary anyway, i guess, to be able to use higher latencies.
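The arithmetic behind those numbers, as a quick sketch:

```python
# At a given bpm, a quarter note lasts 60000 / bpm milliseconds,
# and a 16th-note sequencer tick is a quarter of that.

def ms_per_sixteenth(bpm):
    return 60000.0 / bpm / 4.0

interval = ms_per_sixteenth(124)  # ~121 ms, matching the ~120 ms above
```

A 30 ms timer deviation is therefore roughly a quarter of the tick interval, which is easily audible as mistiming.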

the other tricky part is how to adjust the deltatime of the midi events to the frequency at which processevents and processreplacing are called. i made some measurements using the stopwatch class, and on my machine with a small latency of 50 they are called every 50 ms on average, but with some outliers of 25 ms in between. i haven't thought it through fully, but in theory i should be able to use this stopwatch, check how much time has passed since the last call, compare it to how much time should have passed based on the latency, and modify the deltatime of the note events accordingly. if that works, i should be on the finishing line with this topic.
Feb 3, 2015 at 7:01 PM
"so i changed the approach to put the noteon and noteoff events into a list and processevents is called every time before processreplacing is called, even if there are no events."
  • That's it, exactly. There's no clear direction in the Vst documentation, but I got the best plugin compatibility this way.
I think your approach to DeltaTime sounds right. Instead of using a stopwatch, you could measure elapsed time in samples; the vst callback functions provide the number of samples that need to be processed. If needed, you can convert samples to milliseconds using the sample rate. Usually the sample block will be very stable and match the latency you have set. Latency should be set in samples in NAudio and Vst.Net, not milliseconds. 512 is a typical default; I wouldn't go under 128 samples without specialized hardware.
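The samples-to-milliseconds conversion mentioned above is simple arithmetic; a minimal sketch:

```python
# Convert between a sample count and elapsed time, given the sample rate.
# At 44100 Hz, one sample lasts 1000 / 44100 ms.

def samples_to_ms(samples, sample_rate):
    return samples * 1000.0 / sample_rate

def ms_to_samples(ms, sample_rate):
    return int(round(ms * sample_rate / 1000.0))

block_ms = samples_to_ms(512, 44100)  # a 512-sample block ~ 11.6 ms
```

Counting in samples rather than stopwatch milliseconds avoids drift between the timer clock and the audio clock, since the audio callback itself reports exactly how many samples it consumed.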
Feb 3, 2015 at 7:27 PM
Edited Feb 3, 2015 at 7:30 PM
If I were you, I'd convert everything to absolute samples for easier calculation.

First you'd need to calculate how many samples there are between each step of your sequencer, using the BPM and sample rate.
Every time you tick your sequencer forward, add that amount to a global sample position.
You'll then know the sample position at which each step's events should start.
Alternatively, you could convert milliseconds to a sample position on each step, although it might be less precise.

Tag each event with its sample position.
In Vst.Net, accumulate the sample position at the end of each callback.
When you pop events to Vst.Net, calculate the difference between the Vst.Net sample accumulator and your event's sample position; that difference is your reference for calculating the DeltaFrame of the event.
If the DeltaFrame offset is greater than the size of the Vst.Net latency buffer, keep the event in your queue.
Pop events to Vst.Net only when they are due to occur in the next requested sample block.
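The whole recipe above can be sketched end to end. This is illustrative Python, not Vst.Net code; the class names, the `schedule`/`next_block_events` helpers, and the event tuples are all my own invention, standing in for the real host plumbing:

```python
# Hedged sketch of the absolute-sample-position scheme: the sequencer tags
# each event with the sample position of its step; the audio-callback side
# keeps a sample accumulator and pops only the events due in the next block,
# converting them to block-relative delta frames.

def samples_per_step(bpm, sample_rate, steps_per_quarter=4):
    # samples in one quarter note, divided into steps (16th notes here)
    return sample_rate * 60.0 / bpm / steps_per_quarter

class Sequencer:
    def __init__(self, bpm=124, sample_rate=44100):
        self.step_samples = samples_per_step(bpm, sample_rate)
        self.queue = []  # (absolute_sample_position, midi_event)

    def schedule(self, step_index, midi_event):
        # Tag each event with the absolute sample position of its step.
        self.queue.append((int(step_index * self.step_samples), midi_event))

class AudioCallback:
    def __init__(self, sequencer, block_size=512):
        self.seq = sequencer
        self.block_size = block_size
        self.sample_pos = 0  # accumulator, advanced after each callback

    def next_block_events(self):
        """Events due in the upcoming block, as (delta_frame, event)."""
        due, remaining = [], []
        for pos, ev in self.seq.queue:
            if pos < self.sample_pos + self.block_size:
                # Clamp late events to delta frame 0 so they still play.
                due.append((max(0, pos - self.sample_pos), ev))
            else:
                remaining.append((pos, ev))
        self.seq.queue = remaining
        self.sample_pos += self.block_size  # accumulate per callback
        return due
```

In a real host, `next_block_events` would run just before processevents/processreplacing, and the due events would be handed to processevents with their delta frames set.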