Python music sequencer timing problems

badmuthahubbard

I've been trying to get the timing right for a music sequencer using
Tkinter. First I just loaded the Csound API module and ran a Csound
engine in its own performance thread. The score timing was good,
being controlled internally by Csound, but any time I moved the mouse
I got audio dropouts.
It was suggested I run the audio engine as a separate process with
elevated/realtime priority and use sockets to tell it what to play;
that way people could also set up audio servers on different CPUs.
But I've found that the method I came up with for timing the
beats/notes (a threading.Timer on a function that reschedules itself
over and over, sketched below) is too slow: the whole piece played
behind tempo, and it still gave me noise when moving the mouse. So
far I've been using subprocesses, and I'm now wondering whether
sockets could make a difference.
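
Roughly, the pattern I mean was this (the names are just illustrative).
Each Timer only fires after its full interval has elapsed, so the
callback's own run time and the thread start-up cost get added on top
of every beat, and the error accumulates:

import threading

beat_interval = 0.125  # e.g. sixteenth notes at 120 bpm

def play_next_note():
    # ... send the next note to the audio engine here ...
    threading.Timer(beat_interval, play_next_note).start()

play_next_note()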

The overall goal is this: when the user wants to audition a piece,
create an audio engine process with elevated/realtime priority. This
engine also has all the synthesis and sound processing rules for the
various instruments, due to the way Csound is structured. Set up a
scheduler (possibly in another process, or just another thread) and
fill it with all the notes from the score and their times. Also, the
user should be able to see a time-cursor moving across the piece so
they can see where they are in the score. As this last bit is GUI,
the scheduler should be able to send callbacks back to the GUI as well
as notes to the audio engine. But neither the scheduler nor the audio
engine should wait for Tkinter's updating of the location of the time-
cursor. Naturally, all notes will have higher priorities in the
scheduler than all GUI updates, though notes and GUI updates won't
necessarily land at the same moments anyway.

So, I have a few ideas about how to proceed, but I want to know if
I'll need to learn more general things first:
1.
Create both the scheduler and the audio engine as separate processes
and communicate with them through sockets. When all events are
entered in the scheduler, open a server socket in the main GUI process
and listen for callbacks to move the cursor (is it possible to do this
from Tkinter's mainloop, so the mouse can still be moved, albeit
sluggishly, while the cursor moves continuously? one approach is
sketched after this list); the
audio engine runs at as high priority as possible, and the scheduler
runs somewhere between that and the priority of the main GUI, which
should even perhaps be temporarily lowered below default for good
measure.

or

2.
Create the audio engine as an elevated priority process, and the
scheduler as a separate thread in the main process. The scheduler
sends notes to the audio engine and callbacks within its own process
to move the GUI cursor. Optionally, every tiny update of the cursor
could be a separate thread that dies an instant later.

3.
Closer to my original idea, but I'm hoping to avoid this. All note
scheduling and tempo control is done by Csound as audio engine, and a
Csound channel is set aside for callbacks to update the cursor
position. Maybe this would be smoothest, as timing is built into
Csound already, but the Csound score will be full of thousands of
pseudo-notes that only exist for those callbacks. Down the road I'd
like to have notes sound whenever they are added or moved on the
score, not just when playing the piece, as well as the option of
adjusting the level, pan, etc. of running instruments.
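
For reference, Tkinter's after() method lets the mainloop poll a socket
between handling mouse events, which is one possible answer to the
parenthetical question in method 1. A minimal sketch, assuming the
scheduler sends short cursor-position messages over UDP (the port
number and message size here are made up):

import socket
import Tkinter as tk  # "tkinter" on Python 3

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))  # arbitrary local port
sock.setblocking(False)         # never let the GUI wait on the socket

root = tk.Tk()

def poll_scheduler():
    try:
        data, addr = sock.recvfrom(64)
        # ... parse data and move the time-cursor accordingly ...
    except socket.error:
        pass  # nothing waiting; the cursor stays put this tick
    root.after(10, poll_scheduler)  # check again in ~10 ms

root.after(10, poll_scheduler)
root.mainloop()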

It seems method 2 runs the risk of slowing down the timing of the
notes if the mouse moves around; but method 1 would require setting up
an event loop to listen for GUI updates from the scheduler. I was
trying method 1 with subprocesses, but reading from the scheduler
process's stdout PIPE for GUI updates wasn't working. I was referred
to Twisted and the code module for this, and haven't yet worked out
how to use them appropriately.

I don't mind a complex solution, if it is reliable (I'm aiming at
cross-platform, at least WinXP-OSX-Linux), but everything I try seems
to add unnecessary complexity without actually solving anything. I've
been reading up on socket programming, and catching bits here and
there about non-blocking IO. They seem like good topics to know about
if I want to do audio programming, but I also need a practical solution
for now.

Any advice?

Thanks a lot.
-Chuckk
 
Bad Mutha Hubbard

John said:
Hi Chuckk,

I've recently been fooling with something involving timing and synchronising
multiple note-on/note-off events, and I also tried threading and subprocesses
without much success; out of the box, the high-tempo precision wasn't there.

But I had more luck with this approach, if it's not way too simple for your
purpose (I'm not using a GUI, for example); in pseudocode:

from time import time, sleep

start = time()
for event in music:
    duration = len(event)  # really, the length of the event
    play(event)
    while 1:
        timer = time()
        remaining = start + duration - timer
        if remaining < 0.001:
            break
        else:
            sleep(remaining / 2)
    stop(event)
    start += duration

IOW, just check the time, wait half the remaining note duration, check the
time again, etc, till you've reached your desired precision level (in this
case 0.001 sec). The halving of each successive sleep() means that there is
only a handful of calls to time() per note, 5-10 depending on the tempo. (Of
course it could be any fraction, I just pulled that out of a hat; it would
probably be better too if the fraction decremented as the deadline
approached).

Even with this naive algorithm, I'm getting accuracy of <0.001 sec
consistently, up to stupidly high tempos like 3000 bpm (as measured by
inserting a "print remaining" in the loop) without using a separate timing
thread or "sync pulse", and with only a few calls.

Any processes started this way stay in sync, even if individual events are
delayed, because the "start" variable is incremented by the nominal (not the
actually elapsed) duration of the note. (But I guess that's just scheduling,
right? :) )

Obviously there is a limit imposed by arithmetical precision, but I've left
things running all day without seeing a discrepancy.

In terms of audio process communication, I'm using sox in a subprocess to play
samples (and its cheesy synth sound!), and fluidsynth via a socket for MIDI.
The latter is far superior and seems to handle whatever is thrown at it, and
with the jack driver with realtime priority, the dropouts you mentioned
caused by mouse movements and so on disappear.

Of course, this doesn't prove that using a socket is better; I don't think sox
is jack-aware, and if the subprocesses could have different priorities from
the parent process, that may help with the difficulties we're having with
them. Anyone out there know about that?

I've been putting off getting into Csound, but by all accounts it's _the_
audio language. Good luck with your project, it sounds interesting.

Regards,

john

Hi John.
Very interesting approach, halving the remaining duration. Right now
I'm not working with note-offs, Csound takes care of the duration. I
just need to be able to call the notes at the right times. I've managed
to get all the networking/IPC stuff working; I just need to test how
sched.scheduler performs. If it isn't accurate enough, as some have
told me it won't be, I'll have to fall back on using Csound to time the
notes; it's a C library, so I think it will be pretty fast.
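
One quick way to see whether sched.scheduler is accurate enough before
building on it: queue a burst of events at known absolute times and
print how late each one fires. A sketch (send_note is a stand-in, not
a real Csound call):

import sched
import time

s = sched.scheduler(time.time, time.sleep)

def send_note(expected):
    print("late by %.4f s" % (time.time() - expected))

start = time.time() + 0.5             # begin half a second from now
for i in range(32):
    when = start + i * 0.125          # one note every eighth of a second
    s.enterabs(when, 1, send_note, (when,))  # priority 1 for notes

s.run()  # blocks until the queue is empty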

I have nothing but respect and awe for sox. I chose Csound partly
because I want the app to work cross-platform. Csound and the Csound
API work fine with Jack, but for some reason I got audio static even
when connected to Jack with realtime preemption.
There are ways for the subprocesses to have different priorities; they
inherit the parent's priority, but any later changes made by, e.g.,
os.nice() only affect the parent. So one could run os.nice(-2), then
spawn a subprocess, then run os.nice(4), to leave the child process -2
from the default and the parent process +2.
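
Roughly, in code (Unix only: os.nice doesn't exist on Windows, and
lowering niceness usually needs privileges; the csound command line is
just a placeholder):

import os
import subprocess

os.nice(-2)  # parent drops to niceness -2; children will inherit it
engine = subprocess.Popen(["csound", "engine.csd"])  # child starts at -2
os.nice(4)   # parent moves back up to +2; the running child stays at -2
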
I also like and respect Fluidsynth, but I'm working on a pretty
microtonal system, and MIDI doesn't have enough microtonal support.

Csound is the shiznet. It's pretty engrossing, though: since I
discovered it, I've written a fraction of the amount of music I wrote
before!


-Chuckk
 
John O'Hagan

[...]
from time import time, sleep

start = time()
for event in music:
    duration = len(event)  # really, the length of the event
    play(event)
    while 1:
        timer = time()
        remaining = start + duration - timer
        if remaining < 0.001:
            break
        else:
            sleep(remaining / 2)
    stop(event)
    start += duration

Very interesting approach, halving the remaining duration. Right now
I'm not working with note-offs, Csound takes care of the duration. I
just need to be able to call the notes at the right times.

[...]

I'm also using the above code without the "stop(event)" line for playing
non-MIDI stuff (fixed-length samples, for example), which from the sound of
it also applies to your requirement; IOW, you _can_ use it just to start
notes at the right time, because the note-playing loop sleeps till then.

And as I realized after having a good sleep myself, this is only useful for
timing events which occur sequentially and do not overlap. That is how my
program works, since it generates the events in real time, but it may be of
no use in your case. Still, I suppose the same principle could be applied to
reduce the number of calls needed in a separate timing thread.

Unless of course each event contained the start-time of the next event as an
attribute....?
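
Something like that might look as follows: give each event an absolute
start offset and run the same halving-sleep loop against the wall
clock, so overlapping notes are no problem. The event format and
play() are placeholders of my own:

from time import time, sleep

events = [(0.0, "C4"), (0.25, "E4"), (0.25, "G4")]  # (start offset, note)

def play(event):
    print(event)  # stand-in for sending the note to the synth

origin = time()
for start_offset, event in sorted(events):
    while 1:
        remaining = origin + start_offset - time()
        if remaining < 0.001:
            break
        sleep(remaining / 2)  # the same halving trick as above
    play(event)  # note lengths are left to the synth; no stop() needed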

John
 
