Csound is a low-level tool for making sounds and music on a computer. It has a large, active community of people who use it. The following interview was sparked by the announcement of a Csound techno mailing list and we thought we'd go and investigate...
For some background information about Csound, follow these links: www.csounds.com/whatis/index.html www.csounds.com/cshistory/index.html
Please note that Csound isn't Open Source because it is distributed under a modified MIT license that restricts its source code to "academic and research use".
Mailing list archive: plot.bek.no/pipermail/csoundtekno
Some sample questions:
* What is Csound? How are people using it?
* Why did you start a new mailing list when the Csound community already had an active one?
* With commercial programs like Max and Reaktor available, why do you think people would use a non-commercial program like Csound?
* What advantages does Csound provide to a techno musician?
* What information is missing for a techno musician to get started with Csound?
* How do you see people using Csound in the future?
(Thanks to Kevin Conder for the links above.)
Mstation: I'll start right at the beginning by asking what Csound is and does.
Iain Duncan: Well, I'm probably not the best person to answer that, but I'll give it a crack. Csound is essentially a programming language ( well, a scripting or mark-up language if we want to get picky ) for digital audio, including software synthesis, effects, and other digital manipulation. The main difference between Csound and things like Reaktor, PD, or Max/MSP is that it is a text based programming language with similarities to BASIC, C, and assembly. It is one of the many offshoots of the original Music N audio programming languages, which were the very first forms of real computer music tools, where *all* the sound was generated on computer. Csound itself was developed by Barry Vercoe at MIT, but is now maintained by a fairly large community of developers who offer their time very generously. So one of its big advantages is that it is free! And it is growing all the time.
As to what it does, well pretty much whatever you want it to. The truth is that by now the major digital audio programming environments are all so powerful that one can make any one of them do pretty much everything the others do. So I would say, basically everything you've ever heard a synthesizer or effects unit do, though some of them require some pretty advanced programming! The real differences lie in how one does it. So for certain tasks, and for certain ways of working, Csound might make the most sense, or PD, or SuperCollider, or perhaps even raw C. The user has to find out how they want to work to sensibly make the choice, and many people are finding good reasons to use more than one environment depending on the task, or even to combine them, which is becoming more feasible every day with cross-environment tools like CsoundVST, CsoundMax, and the csound~ objects for Max/MSP and PD. Another advantage of Csound is that a user can add to it by programming new features in C, and then have this feature eventually added to Csound as a whole if it is useful to others.
Csound was originally used to render audio in deferred time, which is a source of many strengths and weaknesses. It's a strength because it has been designed to use as much power as you can give it! So if you want to really push your machine ( or just wait while it compiles a piece in deferred time ) you can get a phenomenal level of quality and control. And this means that many very well respected researchers in the fields of synthesis and DSP are constantly contributing cutting edge new features to csound, whether or not machines are ready to run this in real time. Because of this, we get to play with synthesis techniques a ways before they will be seen in most commercial applications that are aimed at real time use only, and more specifically aimed at real time use where the user is probably trying to get a lot happening at once. For instance, you will not see an instrument in say Reaktor or Reason that does only one simple thing but uses most of a 2 gigahertz processor; some direct convolution possibilities spring to mind. But Csound users might have been playing with that in deferred time for years already.
On the other hand, real time Csound is fairly young, and is still being improved. Nonetheless, it is already very good at real time, it's just that there is a lot more coding involved in building nice real time instruments. The performance in real time is excellent because of the way you can choose at a very low level where you're willing to cut quality corners and where you aren't. The fact that Csound allows one to process signals at a slower sampling rate if they are to be used for coarser levels of control helps a lot too, as does the lack of a default graphic interface. It is really not very difficult to simulate midi instruments for real time use, but I think the interesting stuff happens when you ask yourself, "What can't I make my midi instruments do in real time?" If you're willing to ask those questions, and put in the coding to find solutions, Csound will reward you! Because it was not designed around midi, we can make instruments and fx units that aren't hampered by the keyboard oriented note on / note off paradigm. We can stop thinking in terms of notes being only short single events, and sound being in notes. I think this is especially important because these days so many electronic music makers fall into that trap. It is very easy to do when only working with midi gear or with software synthesizers that are designed as midi gear! Sure you can sometimes make your midi gear behave a bit like a modular synthesizer, but it's a real pain, and you have a lot of limits, along with a *really* low control resolution.
With Csound we can do more of what one can do with a real modular synth set up, where sound and controls can be either continuous streams or events ( or some hybrid of the two ). This is really quite a profound difference, and I think accounts for a lot of what I hear as the decline in quality of a lot of dance music. Most producers now are happy to do things quickly and easily, but nothing they make sounds like the synthesizer track "On The Run", on Dark Side of The Moon, from 1973! In my opinion, sculpting really interesting sounds should make you sweat. With a modular synthesizer, or languages that let you get that level of control, we can spend our time really making interesting *sounding* music and be rewarded for the effort and time we put in. The big advantage is that once learnt, it is a lot easier and a whole lot cheaper to experiment with new signal flows, new techniques, and new methods of control, using tools like Csound than with tens of thousands of dollars worth of synthesizer modules!
A few people have commented that Csound is quite 'difficult'. What would you say are helpful areas to know before you tackle it (with particular thoughts about making techno)?
What I like to say is that Csound is difficult to learn, but easy to use. Once you know how to do something, it is very easy and quick to do it again, add more duplicate instruments, tweak it a little, change the control flow, etc. And many of the techniques that seem hard to learn open all kinds of doors once mastered. Csound lets one get down to a very low level, which means that sometimes there might seem to be an awful lot of code involved for something you've never even worried about before with midi gear or soft synths with a lot of user presets or defaults. But the other side of that coin is that you can pretty much control anything any way you want at a very high resolution, and you can build incredibly complicated control matrices if you so desire. You don't have to if you don't want to, but if you think about how a lot of acoustic instruments work, and how many subtleties of performer control affect how many sound attributes, it makes you realize that a guitar or a flute or a voice sounds wonderful because it is so complex!
The biggest hurdle for new users is that they are probably used to commercially written manuals, and nice step-by-step lessons. The world of Csound documentation is a bit of a free-for-all. Some is very accessible, and some is pretty glib. But really, we can't complain if a highly skilled programmer who has just freely donated weeks and weeks of their time to a great new feature doesn't also feel like writing a manual aimed at beginners! So much of what I am trying to do with the new list and with the upcoming CsoundTechno site is to make the initial stage of learning smoother. I feel that the Csound community as a whole will benefit from a broader user base, and that a great Csound techno community can happen if we make it easier for people to get over that initial hump. Right now the best front door is Dr Richard Boulanger's site www.csounds.com, or his book "The Csound Book", but these are not aimed specifically at techno. I hope to provide a new door to Csound that helps makers of electronic dance music get Csound doing what they want as quickly and easily as possible. There is no shortage of great documentation and examples out there, much freely available, and also available in book form, but often the documentation assumes the reader has a certain level of background in, say, synthesis, or DSP, or programming, or Csound itself. So while learning Csound will help you learn synthesis, learning synthesis will also help you learn Csound.
I would say the most important thing is to understand how synthesis methods and sampling methods work on a broad level. This would include how these things work on old analog machines, on samplers, on FM synthesizers, on analog drum machines, etc. For the raw beginner I highly recommend Steve De Furia's books on synthesis and sampling. It also really helps to understand how working with analog machinery compares to working with midi gear, so as to help see the possibilities if we are willing to look beyond midi. Of course understanding midi is pretty important too if you are planning on using Csound sequenced via midi or controlled via midi. The other really important topic to bone up on is how digital audio works in general, or later perhaps how computer synthesis and DSP works. There are lots of great books on digital audio, and the two big books on computer music available right now are the Roads book and the Dodge book. You can find a lot just searching for tutorials online too. However, none of this is absolutely necessary to start learning Csound! You can make great sounds even while only understanding the Csound techniques on a fairly superficial level.
OK, maybe we can illustrate that point while expanding the techno thing. Let's say we've got Csound and our knowledge is superficial or non-existent. What first steps will we take to get some grooves happening?
I would hate to mislead people about what Csound is for or how it works, so I'll start by saying that it's not really something you would use to get some grooves going per se. It's not a sequencer or drum machine, though you could build one in Csound if you wanted to; that would be a fairly complicated project. So what I'll do is answer the question as, "What might we use Csound for in the making of dance music, given that we are beginners in Csound and not experts in synthesis?"
Well, really what we do in Csound is build instruments, which could be conventional synthesizers, samplers, and fx units, or just things that do something new to sound, either generated in Csound or taken from an external sound file. What Csound really excels at in this regard is fine control. It is pretty easy to specify *exactly* what frequency you want your processes to happen at, for instance. So for dance music, this allows a lot of really neat sounding effects whereby some processes that normally we can't control super precisely are tuned to very precise rhythms or patterns. Good examples would be any of the delay-based effects processes, audio filtering, enveloping, amplitude modulation, or frequency modulation.
Probably the easiest thing to get started with is just trying to make new sounds for use as samples. You can make a pretty simple instrument with any of the oscil family of opcodes, and get some neat new sounds by experimenting with the table generation functions. What happens is the table generation functions make a table, and the oscil plays through it. We have some easy additive synthesis possibilities using GEN10 and GEN9, where we can specify the number and strengths of the various harmonics present in the waveform table. If we use more than one oscillator, we can have some chorus by giving them very slightly different pitches. Then we could filter our signal with any of the filter opcodes. After that, I think the next thing to add would be some enveloping, and maybe some lfos. We can add an envelope to any control we can think of, so we could have amplitude envelopes, envelopes on secondary sound sources, envelopes controlling filter cutoff and resonance, you name it. The same goes for lfos.
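By way of illustration, here is a minimal sketch of that kind of starter instrument: two slightly detuned oscillators reading a GEN10 table, a simple amplitude envelope, and a resonant low pass filter whose cutoff is swept by a slow lfo. The table sizes, amplitudes, tunings, and opcode choices below are just illustrative picks, not anything taken from Iain's own instruments.

<CsoundSynthesizer>
<CsInstruments>
sr     = 44100
kr     = 4410
ksmps  = 10
nchnls = 1

  instr 1
iamp  = p4                            ; peak amplitude from the score
ifreq = p5                            ; pitch from the score
kenv  linen   1, 0.05, p3, 0.5        ; simple attack/release envelope
klfo  oscil   800, 0.5, 2             ; slow sine lfo for the filter cutoff
a1    oscil   0.4, ifreq, 1           ; read the GEN10 waveform table
a2    oscil   0.4, ifreq * 1.004, 1   ; slightly detuned copy for some chorus
afilt moogvcf (a1 + a2) * kenv, 1200 + klfo, 0.6 ; resonant low pass, lfo-swept cutoff
      out     afilt * iamp            ; scale the normalized signal up at the output
  endin
</CsInstruments>
<CsScore>
f 1 0 8192 10 1 .5 .33 .25 .2         ; GEN10: five harmonics with falling strengths
f 2 0 8192 10 1                       ; sine table for the lfo
;  start dur amp   freq
i 1 0    2   10000 110
i 1 2    2   10000 220
e
</CsScore>
</CsoundSynthesizer>

Rendering that ( for example with csound -o newsound.wav example.csd ) gives a short clip that can be chopped up and used as a sample, and every number in it is fair game for experimentation.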
For effects, let's take phasing and flange. In Csound we need to use an always on instrument for delay based effects, so we might have two instruments involved. We could have a source instrument that makes some sound like the above, or maybe plays a wave file that we recorded some other way. Then instead of just outputting the sound we send it to a global variable. Then an always-on instrument reads in the global variable, does its thing, outputs the sound, and then clears the global variable. ( If you have used PD, Max/MSP, or Reaktor, the global variables essentially replace the wire lines. ) We need to use this always-on instrument because we can't have the delayed sound being abruptly cut off each time a note ends in the source instrument. Inside our always-on instrument we can use any number of interesting opcodes for effects, but a lot of fun can be had with just delay opcodes and the aforementioned filter opcodes. We could have our delay times very precisely mapped to our tempo, or we could have them hooked up to lfos that are in turn very precisely mapped to tempo. Same goes for pan. So a good starter project might be an instrument that does some swoopy sounding flanging at a precise tempo. By having the delay time ( say from 4 ms to 20 ms ) swoop up and down controlled by the lfo, we will get a swooshing sound that is in time to the lfo frequency. We could add a filter to the sound too, with just a little bit of cut, also swooshing around by controlling the filter cutoff and filter resonance with the lfo. This is a pretty good area to explore too, as it tends to be an area where most modern midi synths are weak. They often have no external lfo that just keeps doing its thing without being retriggered by the beginning of every note. Furthermore, if we use the phasor opcodes and table opcodes ( instead of the simpler lfo opcode ) we can have our lfo wave be whatever is in a table we make. So it need not be a simple triangle, or sine, or whatever, it could be some complicated zig zagging thing, kind of like the pattern-controlled filter in ReBirth. Except that with Csound, the pattern control can be applied to anything you can think of!
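A rough sketch of that signal flow might look like the following ( the variable names, instrument numbers, delay range, and lfo rate are arbitrary examples, not code from Iain ): the source instrument adds its output to a global variable instead of sending it straight out, and an always-on instrument reads that variable, flanges it with an lfo-swept delay, outputs the mix, and clears the variable.

<CsoundSynthesizer>
<CsInstruments>
sr     = 44100
kr     = 4410
ksmps  = 10
nchnls = 1

gasend init 0                        ; global "wire" between instruments

  instr 1                            ; source instrument
kenv   linen  0.8, 0.01, p3, 0.1     ; short percussive envelope
asig   oscil  kenv, p4, 1            ; bright waveform from table 1
gasend =      gasend + asig          ; send to the global bus instead of out
  endin

  instr 99                           ; always-on flanger
alfo   oscil  8, 0.25, 2             ; sine lfo, +/- 8 ms at 0.25 Hz
adtime =      12 + alfo              ; delay time swoops between 4 ms and 20 ms
adel   vdelay gasend, adtime, 50     ; vdelay takes its delay time in milliseconds
amix   =      gasend * 0.5 + adel * 0.5
       out    amix * 10000           ; scale the normalized mix up at the output
gasend =      0                      ; clear the bus for the next pass
  endin
</CsInstruments>
<CsScore>
f 1 0 8192 10 1 .8 .6 .5 .4 .3       ; bright-ish source waveform
f 2 0 8192 10 1                      ; sine table for the lfo
i 99 0 8                             ; flanger runs for the whole score
i 1  0 .5 220
i 1  1 .5 220
i 1  2 .5 330
i 1  3 .5 165
e
</CsScore>
</CsoundSynthesizer>

A filter swept by the same lfo, as Iain describes, could be dropped in right after the vdelay line, and tying the lfo rate to a tempo is just arithmetic on its frequency argument.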
A more complex example might be something like my "mixer channel" for the live electronic rig I am making all in Csound. My chorus processes are actually in the instruments before they get to the channel, where I have pitch modulation and pulse width modulation happening in the synths, with whatever tempo of lfo I want. Then I have another lfo controlling filter cutoff and filter resonance for the source instrument, to get that classic swooping lfo over an acid line effect that analogs excel at. Once the signal reaches the mixer channel, I have one side being delayed by about 4-8 milliseconds, which really adds to the stereo imagery. Then I have a nifty filtered delay unit. The delay unit actually consists of five different delay lines, so it is not really a feedback delay but a multi-tap instead. And each delay line gets filtered separately. This sounds really cool, because I can vary the filter cutoff and resonance of the delayed sound only, with of course another couple of lfos related precisely to tempo. Further, I can vary the actual filter cutoff and resonance of each delay line relative to the others by putting in some mathematical operations on the control values. So I could have, say, the volume die down by 20% for each tap, the filter cutoff die down by 30%, but the resonance increase by 10%. Instead of the amplitude just fading, we hear the sound fade, but it gets more resonant as it goes away, until the filter cutoff and amplitude are quite low, but the resonance is into self-oscillation. When this is added to a pretty simple synth line with a fair bit of space, there is all kinds of interesting pulsating background sound that is subtly adding to the groove, and making a really cool ambience.
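A stripped-down sketch of that sort of filtered multi-tap delay is below. It assumes the same global gasend bus and orchestra header as the flanger sketch above ( it would replace instr 99 there ); the tap times and scaling factors are made up for illustration, and the tempo-synced lfos on cutoff and resonance are left out for brevity.

  instr 99                             ; always-on filtered multi-tap delay
itap   = 0.375                         ; tap spacing in seconds ( illustrative only )
kcf    = 2000                          ; base filter cutoff in Hz
kres   = 0.4                           ; base resonance ( moogvcf self-oscillates near 1 )
adump  delayr  2                       ; a 2 second delay buffer
atap1  deltap  itap                    ; three taps along the buffer
atap2  deltap  itap * 2
atap3  deltap  itap * 3
       delayw  gasend
; per tap: volume down 20%, cutoff down 30%, resonance up 10%
a1     moogvcf atap1 * 0.8,   kcf * 0.7,   kres * 1.1
a2     moogvcf atap2 * 0.64,  kcf * 0.49,  kres * 1.21
a3     moogvcf atap3 * 0.512, kcf * 0.343, kres * 1.331
       out     (gasend + a1 + a2 + a3) * 10000   ; dry signal plus taps, scaled up
gasend =       0                       ; clear the bus each pass
  endin

With more taps, or steeper scaling on the control values, the later taps head toward the fading-but-increasingly-resonant tail Iain describes.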
Well, I could go on forever like this, but hopefully that gives you some ideas that would be useful for dance music!
You can hear a sample example I made at www.riotnrrd.com/~iain/2prtsynth1.mp3
On the example track there are dynamic changes happening as we go along. How does that happen?
On that particular clip there are a number of things going on. First of all, it's an example of my csound sequencer software as well, so I am actually programming that line on the fly as you hear it and it's simultaneously being recorded directly to disk. This is a large project I've been working on for quite some time that will ultimately include a real time sequencer, mixing board, fx, and direct to disk recording for studio work and improvised live gigs. So some of the changes heard are just me changing the pattern as it goes along.
In addition, there are a number of lfos going on in that clip. Let's see if I can remember them all! The sound uses a four oscillator synth: 3 main oscillators plus a square wave sub oscillator. The three main ones are detuned slightly, and are pulse waves. Two of the pulse waves are having their duty cycles modulated by lfos. Then they go through a filter, and the filter cutoff point and filter resonance are also being modulated by an lfo. The instrument then goes through a filter delay unit, where there is a four tap delay, and the four delay taps each pass through more filters. The filter cutoff of the subsequent delays is scaled down on each tap, but also is modulated by another lfo, as is the resonance. This is why sometimes the delay lines seem to swoop towards you and then away again.
Thanks very much Iain.
Editor's note: At this point we had both people moving house. There's a fair bit more that can be said here so we might add to this in the future. Dec. 02 note: Part 2 is here.