|
Some self-praise: I implemented something even a lot of professional MIDI products don't have - tap tempo / MIDI tempo sync while recording - plus a demonstration app that lets you tap tempo with your computer's keyboard.
It was quite a bit more difficult than I anticipated. Wrangling different devices requires experimentation, and you don't want to clog the limited MIDI bus with never-ending sync messages; on the other hand, if you send only the minimum needed for timing (two messages, in theory), a lot of devices won't set their tempo from it immediately, so it takes some finagling to get right. It's actually kind of complicated, logic-wise. The tap tempo demo alone took me the better part of a day, because figuring out the spec and the quirks took a lot longer than expected.
Also, god bless MIDI-OX. I should look it up again and see if I can pay for it. It's worth good money.
Anyway, my MIDI DLL now supports features even some pro tools don't have. Almost. I actually have another day or so of work on it: implementing the other half using high-resolution timers so it can send sync messages as well as receive them.
Another cool thing about it is that it's transparent. It converts MIDI realtime sync into MIDI tempo change messages - usually only present in files, not over the wire - so a recorded sequence automatically gets the appropriate tempo changes in it for saving. My API also respects incoming tempo change messages, even though those only come from files, which makes playing files easier.
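For the curious, the tempo change that lands in a file is the Set Tempo meta event: FF 51 03 followed by a 24-bit big-endian count of microseconds per quarter note. A quick Python sketch of the conversion (the helper names are mine, just for illustration - they're not part of the DLL):

```python
def bpm_to_tempo_meta(bpm: float) -> bytes:
    """Build an SMF Set Tempo meta event (FF 51 03 tt tt tt) for a BPM.

    The payload is microseconds per quarter note, 24-bit big-endian.
    """
    usec_per_quarter = round(60_000_000 / bpm)
    return bytes([0xFF, 0x51, 0x03]) + usec_per_quarter.to_bytes(3, "big")

def tempo_meta_to_bpm(event: bytes) -> float:
    """Recover the BPM from a Set Tempo event's 3-byte payload."""
    return 60_000_000 / int.from_bytes(event[3:6], "big")
```

At 120 BPM the payload is 500,000 microseconds per quarter note, i.e. FF 51 03 07 A1 20.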
It's kind of a convoluted contraption, but then MIDI is sort of kludgy, and a well-designed MIDI widget will handle the weird bits gracefully.
Anyway, yay me. This is a bear, but I'm almost out of the woods.
Real programmers use butterflies
|
|
|
|
|
Congratulations!
|
|
|
|
|
Well, I spoke a little too soon. It turns out MIDI tempo sync isn't very accurate, at least from .NET: the MIDI drivers aren't delivering the callbacks with accurate enough timing, at least by the time the data gets to the .NET code (through all the P/Invoking, the callback, and whatnot).
With MIDI-OX the results are better - my tap tempo app pretty reliably transmits the MIDI timing messages on time and MIDI-OX receives them on time, so I know it's not the MIDI driver itself.
I've cranked up my app's thread priority, even though that shouldn't matter, and I made sure this particular message short-circuits the regular handling and goes through a special abbreviated codepath to speed things up. Still, it's off by a BPM or two in either direction.
I guess I'll leave it in, because it kind of works, and when I do sort out the issue the rest of the code will already be in place. I wish I had a better solution.
Unfortunately, there's precious little documentation on using the low-level MIDI API from Windows.
Real programmers use butterflies
|
|
|
|
|
I've actually thought about adding such a feature to my MIDI sequencer. One thing you might be overlooking: I don't believe Windows delivers keyboard keystrokes to your app with a high-resolution timestamp. I think it relies on SendMessage, which can wait in the Windows queue until your app gets its timeslice. That could explain your variation (~20 ms, I think, although boosting your thread priority might reduce that a bit).
|
|
|
|
|
I'm actually doing it from the console, and I've checked the output with MIDI-OX. It's fine.
It's receiving that's giving me trouble.
Real programmers use butterflies
|
|
|
|
|
I don't understand. The console is not a real-time window in Windows, to my knowledge. But you say that side is working, so...
Good luck with it.
|
|
|
|
|
The tap tempo app (the transmitter) uses the console, and I've checked its timing against MIDI-OX and found it reliable. The console window doesn't have to be real time. Here is the code. Note that _PreciseUtcNowTicks just gets a high-resolution value for the system ticks, like you can get from Stopwatch. I'd use Stopwatch, but based on what I've read I think it can drift on some machines. Note how most of this is a tight loop.
const int MIN_TEMPO = 50;
const int MAX_MSG_COUNT = 100;
Console.Error.WriteLine("Press escape to exit. Any other key to tap tempo");
using (var dev = MidiDevice.Outputs[1])
{
    dev.Open();
    long oldTicks = 0;
    var amnt = 0d;
    var next = 0L;
    var dif = 0L;
    var msgCount = 0;
    while (true)
    {
        if (0 != oldTicks)
        {
            // time out if the implied tempo has fallen below MIN_TEMPO
            var dif2 = (_PreciseUtcNowTicks - oldTicks);
            var tpm = TimeSpan.TicksPerMillisecond * 60000;
            var amnt2 = tpm / (double)dif2;
            if (amnt2 < MIN_TEMPO)
            {
                oldTicks = 0;
                Console.Error.WriteLine("Tap tempo timed out for tempo less than " + MIN_TEMPO + "bpm");
            }
        }
        if (Console.KeyAvailable)
        {
            if (ConsoleKey.Escape == Console.ReadKey(true).Key)
                return;
            if (0 == oldTicks)
            {
                Console.Error.WriteLine("Tap Started");
                oldTicks = _PreciseUtcNowTicks;
            }
            else
            {
                // derive the tempo from the interval between taps
                dif = _PreciseUtcNowTicks - oldTicks;
                var ts = new TimeSpan(dif);
                var ms = ts.TotalMilliseconds;
                var tpm = TimeSpan.TicksPerMillisecond * 60000;
                amnt = tpm / (double)dif;
                oldTicks = _PreciseUtcNowTicks;
                Console.Error.WriteLine("Tapped @ " + amnt + "bpm " + ms + "ms");
                // schedule clock messages at 24 per beat
                next = _PreciseUtcNowTicks + (dif / 24);
                dev.Send(new MidiMessageRealTimeTimingClock());
                msgCount = 0;
            }
        }
        else
        {
            if (MAX_MSG_COUNT > msgCount && 0 != next && _PreciseUtcNowTicks >= next)
            {
                next += dif / 24;
                dev.Send(new MidiMessageRealTimeTimingClock());
                ++msgCount;
            }
        }
    }
}
Real programmers use butterflies
|
|
|
|
|
The 'tight loop' has nothing to do with real time. The Windows scheduler can interrupt your process at any time, as far as I know, so while it's interrupted your 'Console.KeyAvailable' check won't run. A quick google brings up this page, with some info about the behind-the-scenes, which may or may not be appropriate.
So if I understand the problem, you are playing a composition through some app and using this console app to inject timing messages into the playing composition? So you have the device latency of the console app and the latency of the playing app to take into consideration? (Or is latency overcome somehow through the 'MidiMessageRealTimeTimingClock' message - sending a precise tick?)
|
|
|
|
|
I didn't mean true realtime, just realtime enough for MIDI. I know Windows is not a realtime OS.
And anyway, that app isn't the problem. Like I said, its timing is solid and I can verify that.
The other app - the receiver - is the problem.
This is the routine. Note that the only code that executes on incoming timing sync messages is the if block with this heading:
if (0 != _tempoSynchEnabled && 0xF8 == (0xFF & lparam))
That evaluates to true when it's one of these messages, and the rest of the code does not execute.
I cannot speed this up.
void _MidiInProc(IntPtr handle, int msg, int instance, int lparam, int wparam)
{
    switch (msg)
    {
        case MIM_OPEN:
            Opened?.Invoke(this, EventArgs.Empty);
            break;
        case MIM_CLOSE:
            Closed?.Invoke(this, EventArgs.Empty);
            break;
        case MIM_DATA:
            MidiMessage m;
            if (0 != _tempoSynchEnabled && 0xF8 == (0xFF & lparam))
            {
                if (0 != _timingTimestamp)
                {
                    // 24 clocks per quarter note, so the beat is 24x the clock interval
                    var dif = (_PreciseUtcNowTicks - _timingTimestamp) * 24;
                    var tpm = TimeSpan.TicksPerMillisecond * 60000;
                    var newTempo = (tpm / (double)dif);
                    if (newTempo < _tempoSynchMininumTempo)
                        Interlocked.Exchange(ref _timingTimestamp, 0);
                    else
                    {
                        var timeNow = _PreciseUtcNowTicks;
                        if (0L == _tempoSyncTimestamp || 0L == _tempoSyncFrequency || (timeNow - _tempoSyncTimestamp > _tempoSyncFrequency))
                        {
                            // average with the current tempo to smooth out jitter
                            var tmp = Tempo;
                            var ta = (tmp + newTempo) / 2;
                            Tempo = ta;
                        }
                        Interlocked.Exchange(ref _timingTimestamp, timeNow);
                    }
                }
                else
                {
                    // first clock message - just start the clock
                    var timeNow = _PreciseUtcNowTicks;
                    Interlocked.Exchange(ref _timingTimestamp, timeNow);
                }
            }
            else
            {
                m = MidiUtility.UnpackMessage(lparam);
                _ProcessRecording(m);
                Input?.Invoke(this, new MidiInputEventArgs(new TimeSpan(0, 0, 0, 0, wparam), m));
            }
            break;
        case MIM_ERROR:
            Error?.Invoke(this, new MidiInputEventArgs(new TimeSpan(0, 0, 0, 0, wparam), MidiUtility.UnpackMessage(lparam)));
            break;
        case MIM_LONGDATA:
        case MIM_LONGERROR:
            var hdr = (MIDIHDR)Marshal.PtrToStructure(new IntPtr(lparam), typeof(MIDIHDR));
            if (0 == hdr.dwBytesRecorded)
                return;
            var status = Marshal.ReadByte(hdr.lpData, 0);
            var payload = new byte[hdr.dwBytesRecorded - 1];
            Marshal.Copy(new IntPtr((int)hdr.lpData + 1), payload, 0, payload.Length);
            m = new MidiMessageSysex(payload);
            // recycle the sysex buffer for the next long message
            var sz = Marshal.SizeOf(typeof(MIDIHDR));
            _inHeader.dwBufferLength = _inHeader.dwBytesRecorded = 65536u;
            _inHeader.lpData = _buffer;
            _CheckOutResult(midiInPrepareHeader(_handle, ref _inHeader, sz));
            _CheckOutResult(midiInAddBuffer(_handle, ref _inHeader, sz));
            _ProcessRecording(m);
            if (MIM_LONGDATA == msg)
                Input?.Invoke(this, new MidiInputEventArgs(new TimeSpan(0, 0, 0, 0, wparam), m));
            else
                Error?.Invoke(this, new MidiInputEventArgs(new TimeSpan(0, 0, 0, 0, wparam), m));
            break;
        case MIM_MOREDATA:
            break;
        default:
            break;
    }
}
Real programmers use butterflies
|
|
|
|
|
I'm sorry. Your code is not as clean as you claim.
if (_tempoSynchEnabled && ((0xFF & lparam) == 0xF8)) {
Placing the result on the left is just ugly! And not using booleans in a boolean operation when you can???
Anyway, not being very familiar with your codebase, I'm assuming _timingTimestamp is updated on every played beat with the reference time. Or is it something triggered by the first console keydown message? It should be the first if you're changing tempo based on it. Then you'd need another structure for the first console keydown message. Or am I missing something? Hope this quick musing is helpful.
|
|
|
|
|
I can't use a boolean for that because Interlocked.Exchange only works on 32-bit or 64-bit values.
That's documented where the field is declared.
And placing constants before variables in comparisons is something you'll find among folks like me who cut their teeth on C and C++ back when compilers didn't warn you about accidental assignments.
_timingTimestamp holds the system ticks - basically the UTC "now" - of the last timing message received. Each time, we take the difference between it and the current time (also in ticks) to get a period. That period is 1/24 of a quarter note; we convert it to a tempo, and then use that tempo to update the incoming recording and such.
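The arithmetic in that paragraph, sketched in Python just to make the numbers concrete (the constant and function name are mine; the real code is the C# above):

```python
TICKS_PER_MS = 10_000  # .NET TimeSpan ticks are 100 ns each

def bpm_from_clock_interval(delta_ticks: int) -> float:
    """Given the ticks elapsed between two MIDI timing clock messages,
    return the implied tempo. 24 clocks span one quarter note (one beat)."""
    ticks_per_beat = delta_ticks * 24
    ticks_per_minute = TICKS_PER_MS * 60_000
    return ticks_per_minute / ticks_per_beat
```

At 120 BPM a quarter note is 500 ms, so clock messages arrive roughly every 20.8 ms.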
Real programmers use butterflies
|
|
|
|
|
var dif = (_PreciseUtcNowTicks - _timingTimestamp) * 24;
var tpm = TimeSpan.TicksPerMillisecond * 60000;
var newTempo = (tpm / (double)dif);
I don't see any processing to account for when the first keystroke from the console app reached this processing app. Yes, I know about the old C coder's habit. Still ugly! I didn't know about the interop issue, though - can't you still treat it as a boolean, like you can in C++? A value of 0x0001 would still be 'true'.
|
|
|
|
|
I can't declare it as a boolean because a .NET boolean isn't a 32-bit value. C gets away with it in Win32 by declaring BOOL as a 32-bit value (IIRC).
The keystrokes aren't sent to the receiver; only the MIDI messages are.
In fact, the keystrokes are used only by the transmitter app so it can get the timing. The messages it then sends go out at 24 times the frequency of the keypresses.
The code I posted is called each time the receiver app receives a MIDI message. That if conditional you were questioning evaluates to true when one of these timing messages is received.
Real programmers use butterflies
|
|
|
|
|
But where did you handle the first keystroke sent by the console app? From what I saw, you only changed things based on the previous in-app tempo.
|
|
|
|
|
The first keystroke doesn't send a message. It simply starts the clock. Every subsequent keystroke adjusts the timing of the clock. The clock keeps running until you stop tapping fast enough (at least 50 BPM); then the clock stops and you must start over.
So basically, it's a treadmill. The first tap turns it on, and you fall off if you don't keep up at which point you have to get back on and do it again.
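A toy model of that treadmill, sketched in Python for brevity (the class and its names are illustrative only, not the actual transmitter code):

```python
MIN_TEMPO = 50.0  # taps slower than this stop the clock

class TapClock:
    """Toy model of the 'treadmill': the first tap starts the clock,
    each later tap retimes it, and a gap implying less than MIN_TEMPO
    shuts it off so you have to start over."""

    def __init__(self):
        self.last_tap = None  # timestamp of the previous tap, in seconds
        self.bpm = None       # None while the clock is stopped

    def tap(self, now: float):
        if self.last_tap is None:
            self.last_tap = now  # first tap only starts the clock
            return
        bpm = 60.0 / (now - self.last_tap)
        if bpm < MIN_TEMPO:
            self.bpm = None      # fell off the treadmill; restart from here
        else:
            self.bpm = bpm
        self.last_tap = now
```

Taps half a second apart give 120 BPM; waiting a few seconds drops you below 50 BPM and resets the clock.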
Real programmers use butterflies
|
|
|
|
|
Quote: I can't declare it as a boolean. But you can still use it as a boolean in that check and have it 'cast' to bool. Unless you aren't using '0' the same way you use 'false' for that variable. If that's the case, UGH!
|
|
|
|
|
I'd sooner declare a const like const int TS_FALSE = 0; and use that than try to cast an int to a bool for no other reason than so I can use "false" there.
The last thing I want to do is introduce spurious casts into code that's not fast enough as it is.
Real programmers use butterflies
|
|
|
|
|
If C#'s IL doesn't treat 'if (a==0)...' the same as 'if (a)...', I'd be very surprised. But then again, I'm not a C# expert.
|
|
|
|
|
I'm not sure what you mean in this context. I think we're talking about two different things.
Real programmers use butterflies
|
|
|
|
|
I doubt the front-end compiler creates different IL code for "if (intVarImpicitlyCastToBool) { }" and "if (intVar != 0) { }". So your hesitancy is baseless. But I could be wrong, because I don't know if C# is that stupid.
|
|
|
|
|
Oh, I see what you mean. My experience with C# is that the compiler does very little optimization, even in release builds. Either way, the cast isn't necessary, and IMO it doesn't add more clarity than a const would. Frankly, I don't think the bare zero is bad either, but maybe that's just from decades of treating 0 and false as functionally equivalent, as they are in the C world.
Real programmers use butterflies
|
|
|
|
|
Oh, I think maybe I misunderstood what you were asking. Somewhere in the code it checks whether _timingTimestamp is zero; if it is, it knows this is the first message. It clears _timingTimestamp if the messages wind up dropping off or otherwise getting too far apart (a timeout).
Real programmers use butterflies
|
|
|
|
|
If you are using the same timing message for the regular 24/beat as the console window sends, you will never be able to tell which is which. And you can't make the appropriate adjustment.
|
|
|
|
|
Umm, I don't need to differentiate which is which; I only need to know how far apart they occur.
A MIDI timing clock message is sent 24 times per quarter note.
That's all my tap tempo transmitter does.
The recorder sees these MIDI timing clock messages.
Its job is simply to determine the tempo based solely on how far apart they occur, and adjust its own tempo accordingly when it sees them.
I don't need to distinguish between different clock signals (like beginning and end); there's no such concept in the MIDI wire protocol anyway.
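Concretely, the spacing works out like this (a small Python sketch; the function name is mine):

```python
def clock_interval_ms(bpm: float) -> float:
    """Milliseconds between MIDI timing clock messages at a given tempo.
    One quarter note lasts 60000/bpm ms, and 24 clocks span it."""
    return 60_000 / bpm / 24
```

At 120 BPM that is about 20.8 ms per clock message, which is the spacing the transmitter aims for and the receiver measures.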
Real programmers use butterflies
|
|
|
|
|
I was under the impression that you used the _timeStamp... values internally in the main program's processing - in other words, that they were already embedded in the stream for timing purposes. Good luck getting to the bottom of everything!
|
|
|
|