music: interview: free jazz improv: Bill Hsu
Bill Hsu is an associate professor in the computer science department at San Francisco State University. He is also a keen free-jazz improv exponent, and Mstation met him while he was doing a late summer round of performances and conferences. When he returned home we had an email chat about the likes of ... what is free-jazz improv anyway? ...
You mentioned that you had an early piano playing background which I'm presuming included the usual Classical repertoire.
Yup. I had a fairly traditional Classical training, and also took some jazz piano lessons.
Did you pick up your newer interests as you worked through university towards your computer science PhD?
My PhD had no musical connections, though I did take a couple of computer/electronic music classes in school. My listening seemed to move toward more non-traditional music even quite early on. When I was maybe 15, I received a gift collection of classical records. There was some Bartók, Prokofiev, etc., which I somehow took to relatively quickly. I also remember buying Berg's Wozzeck when I was in high school. Later it was the noisy side of rock, industrial music, free jazz etc, a pretty typical progression.
And then there was Free Improv! First of all, would you like to define it?
I see free improv as a musical "conversation", mostly without hard restrictions on materials that have been previously agreed upon. It's more of a "practice" than a style. For instance, I was at John Russell's Fete Quaqua in London in August; there were freely improvised pieces that had an abstract jazzy feel, or were noisy and textural, or had a more rock-like sensibility.
Who are (comparatively!) well known artists in this genre?
The early practitioners are usually considered to be Chicago's AACM (Association for the Advancement of Creative Musicians, including Muhal Richard Abrams, Roscoe Mitchell, Joseph Jarman, Anthony Braxton, among others), English improvisers such as Derek Bailey, Evan Parker, John Stevens, the AMM core of Eddie Prévost, Keith Rowe, and John Tilbury, also Peter Brötzmann and Alexander von Schlippenbach in Germany, and too many others to name.
Other than the artists named on the Wiki site, this is a good site for European improvisers:
In the San Francisco area:
Not all of these would identify themselves as primarily free improvisers though.
Are there regular events in San Francisco? I guess the Beanbender's Collective should get a mention here!
While I (and Dan Plonsey) would love a mention of Beanbender's, we haven't put on anything in years. The bayimproviser site lists a number of events, and this is the online version of the calendar we used to help put out (neither of us is involved anymore):
You're interested in real-time performance systems as well as algorithmic accompaniment, taking in such areas as mood and interaction modelling. In something like mood modelling, do you start with intuitive values and then refine them, or do you start more from some intellectual framework?
I actually work with mood modeling sideways :-). Many improvisers I've worked with prefer not to make a lot of direct references to affect/mood, since it can be pretty subjective. We tend to talk about musical materials in a more abstract way, but still try to be sensitive about the non-musical references that may be evoked in listeners (or performers).
I've studied some mood modeling research papers, and basically pulled some concrete musical aspects of their results to use in my work. For example, I work explicitly with tempo, loudness and timbre, which are the 3 main musical parameter classes that are used in mood modeling. But I don't go beyond that to try to come up with a description of the mood of specific musical materials. That can be tricky especially for music that does not sound much like pop or traditional classical music. For example, my father said after one of my gigs that we sounded "so angry", when that was not how we felt at all!
Is timbre considered to be a component of mood?
It's one musical parameter that is considered strongly tied to mood. For example:
I actually use a lot less information than people who do automatic mood modeling.
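[Hsu doesn't describe how tempo, loudness and timbre would actually feed a mood estimate, but projecting such parameters onto a two-dimensional arousal/valence plane is a common device in the mood-modeling literature he refers to. Below is a minimal Python sketch of that idea; the function name, the normalization ranges and the weights are all invented for illustration and are not taken from his system.]

```python
# Toy mapping from Hsu's three parameter classes to a 2-D mood plane.
# All coefficients and ranges below are illustrative assumptions.

def mood_point(tempo_bpm, loudness_db, roughness):
    """Map (tempo, loudness, roughness) to an (arousal, valence) point.

    arousal: fast + loud material scores high (a common finding in
             mood-modeling work)
    valence: rough/dissonant material scores low ('unpleasant')
    Inputs are squashed into [0, 1] using ad hoc ranges.
    """
    t = min(max((tempo_bpm - 40.0) / 160.0, 0.0), 1.0)   # 40-200 bpm
    l = min(max((loudness_db + 60.0) / 60.0, 0.0), 1.0)  # -60 to 0 dBFS
    r = min(max(roughness, 0.0), 1.0)                    # assumed already 0-1
    arousal = 0.5 * t + 0.5 * l
    valence = 1.0 - r
    return arousal, valence

# Loud, fast, rough material lands in the high-arousal / low-valence
# quadrant -- the region mood models usually label "angry", which may be
# why Hsu's father heard his gig that way.
angry = mood_point(tempo_bpm=180, loudness_db=-6, roughness=0.8)
calm = mood_point(tempo_bpm=60, loudness_db=-40, roughness=0.1)
```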
In describing a timbre, in computer language terms, what sort of variables do you have?
Timbre is the most poorly defined of the musical parameters. In my work, I mostly use acoustic roughness as (essentially) a rough indicator of consonance/dissonance or pleasantness/harshness. But to adequately describe and distinguish even a small class of timbres can be quite complicated. Usually several measures are necessary to capture aspects of the sounds in question.
For example, for saxophone sounds, I use mostly roughness, harmonicity of spectrum, noisiness, presence of multiphonics, and presence of sharp attacks. These are higher level descriptions that have to be derived and inferred from a larger set of low level measurements.
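[Hsu gives no implementation details for these saxophone descriptors. The NumPy sketch below shows one naive way a few of the low-level measures might be computed from a single audio frame; the spectral-flatness stand-in for noisiness, the fixed fundamental `f0` for harmonicity, and the crude envelope used for attack detection are all assumptions for illustration, not his actual measures.]

```python
import numpy as np

def timbre_descriptors(frame, sr=44100, f0=220.0):
    """Toy estimates of a few timbre features for one audio frame.

    Illustrative stand-ins only: noisiness as spectral flatness,
    harmonicity as energy near integer multiples of an assumed f0,
    attack as the steepest rise of a coarse amplitude envelope.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    power = spectrum ** 2

    # Noisiness: spectral flatness (geometric / arithmetic mean of power);
    # near 1 for white noise, near 0 for a pure tone.
    flatness = np.exp(np.mean(np.log(power + 1e-12))) / (np.mean(power) + 1e-12)

    # Harmonicity: fraction of spectral energy within 5% of each harmonic.
    harmonic_energy = 0.0
    for k in range(1, int((sr / 2) // f0) + 1):
        band = np.abs(freqs - k * f0) < f0 * 0.05
        harmonic_energy += power[band].sum()
    harmonicity = harmonic_energy / (power.sum() + 1e-12)

    # Sharp attack: largest jump in a crude 64-sample max-envelope.
    env = np.abs(frame).reshape(-1, 64).max(axis=1)
    attack = np.max(np.diff(env), initial=0.0)

    return {"flatness": flatness, "harmonicity": harmonicity, "attack": attack}

# Sanity check: a sine should read as harmonic/tonal, noise as flat/noisy.
t = np.arange(4096) / 44100.0
tone = np.sin(2 * np.pi * 220.0 * t)
noise = np.random.default_rng(0).standard_normal(4096)
d_tone = timbre_descriptors(tone)
d_noise = timbre_descriptors(noise)
```

In a real system these frame-level measures would then be aggregated and thresholded to infer Hsu's higher-level descriptions (multiphonics, sharp attacks, and so on), which the sketch does not attempt.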
With algorithmic accompaniment, are time lags a problem?
It doesn't seem to be in my system. But then I'm not really generating a well-defined "accompaniment"; in improvisation, things are more open-ended, though good musical results can depend on tight timing constraints.
I'm also very careful not to overload my systems; there are components and whole subsystems that I've stopped working with because they take too much CPU power. I'm sure time lag would be a problem if I threw in everything!
People tend to downplay the artistic expression involved in this but, in fact, whatever happens, the programmer has had a distinct say in what will happen. What do you think?
Many improvisers I work with are very down-to-earth about discussions of concrete musical events and technique. They tend to talk less about "inspiration" and "expression". A sound is the result of a sequence of physical actions. So given that the programmer sets up sequences of actions and sounds, s/he certainly has a lot of control over the musical results.
I do make the distinction between a programmer who builds an instrument/tool, versus a programmer who actually has direct control/influence over the choice of materials, timing of events and the final musical result. The former is more like a traditional instrument builder who builds, say, a violin; the latter is closer to a composer or performer.
I think you use Pure Data mostly. What do you like about it?
It's actually Max/MSP, which is the closed-source commercial version of Pure Data. It's very easy to build quick prototypes and proof-of-concept systems, and there are lots of objects that interface with commonly used devices. I have very limited experience building GUIs, and in Max/MSP and Pd, the GUI basically comes for free.
Could you give us a code fragment to illustrate something that you're doing?
One question that creeps into a lot of Mstation interviews is about controllers for electronic music artists - the fact that watching somebody beetling away quietly behind a laptop screen is fairly uninteresting, and that the interface itself is limited. Do you have any ideas for the future in this line? Totally new "instruments"?
I tend to take a conservative approach to designing new instruments/interfaces. I've been using the same Wacom tablet for years, and I think I use it in a very straightforward, traditional way. I wanted an interface that would let me shape sounds in the flexible, tactile way that a saxophonist might play. The software components that I use with the tablet haven't changed much in years; however, my gestural vocabulary has expanded with practice. It's like getting a little better at playing an instrument after years of practice.
Thanks a lot.
photo by Anthony Galante. Left is James Coleman playing theremin. Right is Bill Hsu.