Andrew Bulhack

Andrew has done some interesting work with tclmidi in creating dance/trance type music. In this way of working, the music isn't fully specified in advance; the computer makes some of the decisions about what to play. Here he talks about how he did it.

related links:
  tclmidi
  Does a Rock Implement Every Finite-State Automaton?
  Timidity++
  Melbourne, Australia

m-station: We'd better start by giving your Napster tune a plug (even though it was done on a Mac with Cubase :) It got quite high in the charts, I hear. What is the title and the URL?

Andrew: It's at; the title is Metallica Ate My Napster. Though this will change, as I just got mail from saying that they're disabling songs with third-party names in the titles. It'll probably be called Lawyers Ate My Title or somesuch... heh heh

m-station: Now to Linux... /vmunix is the group name at - like to tell us a little about it?

Andrew: /vmunix is a name I use for computer-generated music experiments. It's short for 'cat /vmunix >/dev/audio'. (I came up with it back before everything went Linux, when I was mostly playing around with UNIX workstations at university, which is why it's not 'cat /vmlinuz >/dev/dsp'.) In 1995, I found Tclmidi, an extension to the Tcl scripting language which adds MIDI data manipulation functions. Having this, I decided to test a hypothesis of mine: that one could write programs to generate certain types of electronic dance music without human intervention. I started working on some scripts, which mutated in slightly more experimental directions along the way, though in the rhythm patterns, for example, you can still hear the dance/techno/house foundations of the project.

m-station: When you say 'mutate', do you mean you had the code change the code, or did the idea mutate?

Andrew: The idea mutated. First I developed code for generating things such as drum patterns, chord progressions and arpeggios, and on top of that I added higher-level code for switching various lines in and out, fading them, replacing them, and later for doing things like gradually switching in all elements one by one and then cutting all but one back. These constructs I called gestures, and I used a finite-state automaton of sorts to sequence them. (Revision c of the script, used to generate tracks whose names started with 'c', had gestures.)
As I added these elements, one by one, the music began to take on a more textured, experimental quality, diverging from straight dance music.

m-station: Would it be fair, in this instance, to call a finite-state automaton a piece of code that makes choices from within a limited set of possibilities? Is this implemented with some sort of rand function, or are the choices more 'if/then/else'?

Andrew: It's a bit more specific. A finite-state automaton keeps track of which state it's in, and from each state there are zero or more paths leading to other states (and, in some cases, actions). The FSA starts off in an initial state. Depending on which state it's in, it will play things as they are, switch elements in or out, fade elements, mutate elements, and so on. At the end of a gesture, it changes state, selecting one of the states reachable from its current state.

...I ran several generations of the script, making a large number of MIDI files; the choicest 7 ended up on the tape. I used a Gravis UltraSound card to play the MIDI files into a tape recorder. The original tape wasn't named 'vmunix'; in fact, it didn't have a name. It was a cassette containing the seven tracks now on the web site. The only explanatory material was the source listing of one generation of the script (decommented for brevity) on the case liner. In 1995, about 30 copies were made; they were distributed privately. A second run of the tape was made in 1997; this time, it was released under the vmunix name and given the designation "vmunix00". The rerelease was not taken from the original tape, but rather directly from the files played onto the tape. Since 1997, a program named Timidity has come out; this is basically a wavetable synthesizer which uses Gravis UltraSound patches. Using it, I remastered the MIDI files to clean audio files, entirely in the digital domain; I then burned a CD, made MP3s of it and put them on, mostly for posterity.
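The gesture-sequencing automaton Andrew describes can be sketched in a few lines of Python. This is an illustrative re-implementation, not his Tcl; the gesture names and transition table here are invented for the example.

```python
import random

# Each state maps to a list of (gesture, next_state) transitions; listing
# a transition several times makes it proportionally more likely to be
# chosen, as in the vmunix FSA table. All names here are made up.
fsa = {
    0: [("fade_in", 1)],
    1: [("meander", 1), ("meander", 1), ("mutate", 1), ("crescendo", 2)],
    2: [("cut_almost_all", 1)],
}

def run(fsa, start=0, steps=6, rng=random):
    """Walk the automaton, returning the sequence of gestures chosen."""
    state, played = start, []
    for _ in range(steps):
        gesture, state = rng.choice(fsa[state])
        played.append(gesture)  # a real system would execute the gesture here
    return played

print(run(fsa))
```

Each run yields a different but well-formed sequence of gestures; this is the sense in which the music isn't exactly specified yet still has structure.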
There is a CD on sale there (made on demand by), and the booklet of that does contain a printed source listing, in the tradition of the original.

m-station: Timidity is very nice - a saviour if you have something like an OPL3 on board (like I used to have). With tclmidi, did you start with MIDI sequences from somewhere else, or did you make up raw MIDI and feed it in? And the process from there - was it algorithmic?

Andrew: It's algorithmic. Basically, the music was created by generating components and putting them together. For example, snare drum patterns were generated by randomly placing snare drum hits in a 16-event bar. Some events had a higher probability of containing a hit (worked out by hand). The melodies were done with a vaguely guitar-like paradigm: a chord progression (with chords consisting of six offsets added to base note values, much like fret positions) was generated semi-randomly (and somewhat naively; I didn't put in any real musical theory there), and things such as bass lines and arpeggios consisted of indices into the chord (i.e., string numbers), which were mapped onto the progression.

m-station: Ah right, so in the initial stages you'd set it up so it has the outlines of a particular genre - so conceivably, with the right sounds, you could go genre-hopping.

Andrew: Not so much sounds as routines for generating elements and putting them together.

m-station: Heh heh, for some reason C&W popped into my head - but that would just be a matter of using a different patch set or soundfont or whatever (and no, I've never heard of a C&W patch set :) In tclmidi, is it possible to dynamically load patches?

Andrew: tclmidi has nothing to do with patches; it just writes the MIDI file to disk. In the scripts, I've assumed a General MIDI patch set, which is what the GUS patches are. You could possibly do something C&W-ish with those sounds (there are guitars and drums, and perhaps even a banjo somewhere).

m-station: Like to give us a code sample to see what it looks like?
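The guitar-like paradigm can be sketched in Python. This is an illustrative reconstruction; the chord shape and note numbers are invented, not taken from the vmunix scripts.

```python
# A chord is six offsets ("fret positions") added to a base MIDI note,
# one per string; an arpeggio or bass line is a list of string indices
# that gets mapped onto whatever chord is current. Values are invented.
E_SHAPE = [0, 7, 12, 16, 19, 24]   # rough open-E-style voicing offsets

def chord_notes(base, shape):
    """Absolute MIDI notes for a chord: base note plus per-string offsets."""
    return [base + off for off in shape]

def render_line(string_indices, progression, shape=E_SHAPE):
    """Map string indices onto each chord of the progression in turn."""
    out = []
    for base in progression:
        notes = chord_notes(base, shape)
        out.extend(notes[i] for i in string_indices)
    return out

# An arpeggio over a two-chord progression (E=40, A=45 as base notes):
print(render_line([0, 2, 4], [40, 45]))   # -> [40, 52, 59, 45, 57, 64]
```

The same index pattern produces a different run of notes for each chord, which is why changing the progression changes every line built on top of it.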
Andrew: This is the procedure which generates a pattern of snare drum hits; it puts one on the 2nd and 4th beat of every bar, and then randomly adds hits according to the probabilities in the list there.

    proc makeSnareRhythm {} {
        set acoustic_snare 38
        set snare_events {}
        set a_snare_events {}
        for {set i 96} {$i<384} {incr i 192} {
            lappend snare_events [list $i 24 127]
        }
        set additional_snares [lindex [exec random 2] 0]
        set ps [uniq [lsort -integer \
            [weightedRandom {0 3 0 0 0 0 0 6 0 4 0 0 0 0 0 4} \
            $additional_snares]]]
        foreach p $ps {
            lappend a_snare_events [list [expr $p*24] 24 127]
        }
        return [list [list $acoustic_snare \
            [mergeDrums $snare_events $a_snare_events]]]
    }

And here is the FSA which controls gestures:

    # a transition is a list consisting of a gesture and a destination state.
    # a state is a list of transitions.
    # state 0 is the start state, from whence gFadeIn may be called; state 1
    # is the default state. State 2 is when everything is playing and is really
    # loud. State 3 is state 2 without drums. 5 and 6 are a buffer to force
    # meandering
    set fsa {
        {{gFadeIn 7}}
        {{gMeander 1} {gMeander 1} {gMeander 1} {gChangeOff 1}
         {gMetamorphose 1} {gMeander 1} {gChangeOff 1} {gMetamorphose 1}
         {gNormalise 1} {gNormalise 1} {gDrumsIn 1} {gNormalise 1}
         {gCrescendo 2} {gCutAlmostAll 4}}
        {{gNoDrums 3} {gCutAlmostAll 4}}
        {{gMassMutate 1} {gCutAlmostAll 4}}
        {{gChangeOff 5}}
        {{gMeander 6}}
        {{gMeander 1}}
        {{gDrumsIn 1} {gDrumsIn 1} {gMeander 6} {gMeander 5}}
    }

m-station: That looks like the God part.

Andrew: Something like that; the high-level control of the structure, anyway. Each gesture is a Tcl procedure which generates zero or more cycles and optionally changes global variables (such as patterns, which parts are playing, &c).
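The weightedRandom helper used in the listings isn't shown; a plausible Python equivalent is below. Its behaviour (drawing n slot indices with probability proportional to per-slot weights) is an assumption inferred from how it's called, not the original code.

```python
import random

def weighted_random(weights, n, rng=random):
    """Draw n slot indices, each picked with probability proportional to
    its weight; zero-weight slots are never picked. This is a guess at
    what the Tcl weightedRandom helper does, inferred from its call sites."""
    slots = [i for i, w in enumerate(weights) if w > 0]
    probs = [weights[i] for i in slots]
    return [rng.choices(slots, weights=probs)[0] for _ in range(n)]

# The snare weight list from makeSnareRhythm: only slots 1, 7, 9 and 15
# can ever receive an extra hit, with slot 7 the most likely.
snare_weights = [0, 3, 0, 0, 0, 0, 0, 6, 0, 4, 0, 0, 0, 0, 0, 4]

# Mirror the Tcl's "uniq [lsort ...]": sorted, duplicate-free positions.
extra = sorted(set(weighted_random(snare_weights, 2)))
print(extra)  # some subset of the slots 1, 7, 9, 15
```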
Here is the "fade in" gesture, which occupies the first 16 bars of a piece:

    proc gFadeIn {} {
        global m_allow p_allow allowWeights do_fade
        puts "gFadeIn:"
        for {set i 0} {$i<4} {incr i} {
            puts $m_allow
            puts $p_allow
            addOneCycle
            # find a good element to turn on
            set ind [weightedRandom $allowWeights]
            while {[allowedp $ind]} {
                set ind [weightedRandom $allowWeights]
            }
            frobAllow $ind
            if {[fadep $ind]==1} {setFadeState $ind "in"}
            set allowWeights [lreplace $allowWeights $ind $ind 0]
        }
        return 4
    }

m-station: So conceivably, once you had a reasonable library of gestures and FSAs, you could have a higher-level tool which interfaced with them, one that could set broad parameters (i.e., a genre) and set it going.

Andrew: Possibly. The system I designed didn't use much in the way of music theory (because I didn't know much of the formal theory), but you could theoretically design a system that makes well-formed (if not necessarily good) examples of music in any formulaic genre.

m-station: Do you think you'll make more music with tclmidi?

Andrew: Probably not in the near future. I'm working on a system of my own, in C++, which will generate music directly to audio, allowing me to integrate DSP effects into the composition process. Though that's probably not going to yield results very soon, because of other commitments.

m-station: Working with Cubase and MIDI keyboards is a vastly different way of doing things. Do you have any comments about working either way?

Andrew: Yes; with Cubase, it is a process of composition, where I shape the music directly, often working by intuition. Writing scripts, however, is a much more analytical process. In the latter case, I do not actually author the music itself, but merely define the rules that generate it. It is, in some ways, a means of exploring the boundaries of musical formalisms, and testing assumptions about formulae.

m-station: Thanks a lot.
