One-line Algorithmic Music in XNA

Last week I found an interesting video on YouTube, accompanied by an article:

I liked the idea so much that I decided to give it a try in XNA. I also saw a JavaScript implementation online, where the user could write the code directly into an input field and the application would generate the corresponding sound clip for them. This left me wondering about two points in particular:

  1. Can I get the XNA Audio API to work in a Windows Forms application as a standalone library, i.e. without having a Game instance or even any graphics at all?
  2. Is there a way to let the user write the code into a text box and hear the result without having to recompile the application?

The answer to both questions is yes. Here’s how I did it.

Using the XNA Audio API as a standalone library

This one was particularly easy to solve. I started by creating a Windows Forms application and manually adding a reference to the Microsoft.Xna.Framework assembly. This was enough to grant me access to everything I needed from XNA, in particular the DynamicSoundEffectInstance class.

There’s one catch, though. If you’re not using the Game class (as in this case), XNA expects you to call the FrameworkDispatcher.Update() method periodically. I’ve read that around 20 times a second is enough, so I went with that. You also need to call that method at least once before starting audio playback, or your application will crash.

Here’s how I solved it. Note: if you’re not familiar with XNA’s dynamic audio API, check out one of my earlier articles, such as Creating a Basic Synth in XNA 4.0 – Part II.

  1. I created a new thread for the audio.
  2. On that new thread, I started by calling FrameworkDispatcher.Update() once.
  3. Then I entered a loop that calls FrameworkDispatcher.Update() again and submits my audio buffers.
  4. I added a Thread.Sleep(50) at the end of the loop so that it runs roughly 20 times per second.

Here’s the relevant part of the code. When the form has finished loading, I create my audio buffer, a DynamicSoundEffectInstance object, and start the audio thread:
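Something along these lines (a minimal sketch rather than the original listing; the field names, the 8 kHz sample rate and the buffer size are my own choices):

    // Requires a reference to Microsoft.Xna.Framework and these namespaces:
    //   using System;
    //   using System.Threading;
    //   using System.Windows.Forms;
    //   using Microsoft.Xna.Framework;
    //   using Microsoft.Xna.Framework.Audio;

    private void MainForm_Load(object sender, EventArgs e)
    {
        // Create the byte buffer that will hold the generated samples
        _buffer = new byte[BufferSize];

        // 8 kHz mono is the typical sample rate for these one-line formulas
        _instance = new DynamicSoundEffectInstance(8000, AudioChannels.Mono);

        // Spin up the dedicated audio thread
        _audioThread = new Thread(AudioThreadProc);
        _audioThread.Start();
    }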

And this is the thread method. The comments should make it self-explanatory:
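Here’s a sketch of what it could look like, following the steps above (the _running flag is an assumption of mine; it stops the loop when the form closes, as shown further below):

    private void AudioThreadProc()
    {
        // XNA needs at least one FrameworkDispatcher.Update() call
        // before any audio playback starts, or it will throw
        FrameworkDispatcher.Update();

        // Start playback; SubmitBuffers() keeps the queue filled
        _instance.Play();

        while (_running)
        {
            // Keep the XNA framework services ticking (~20 times per second)
            FrameworkDispatcher.Update();

            // Generate and queue more audio data if needed
            SubmitBuffers();

            Thread.Sleep(50);
        }
    }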

To make the audio thread quit when the application exits, I did this:
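One simple way to do it (a sketch; the original code may have handled it differently) is to clear the _running flag when the form closes and wait for the thread to finish:

    private void MainForm_FormClosing(object sender, FormClosingEventArgs e)
    {
        // Tell the audio loop to stop and wait for the thread to end
        _running = false;
        _audioThread.Join();
    }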

Finally, here’s what I did to submit the audio buffers. I’ll talk about the GetSample() method later, since it’s linked to the runtime compiler side of the application (the part that lets you write and listen to your own code in real time). Also notice that inside the for-loop I increment an integer variable called “_time”. This is the value that corresponds to the “t” in our formulas:
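A sketch of that submission step (the method name and the threshold of three pending buffers are my own choices):

    private void SubmitBuffers()
    {
        // Keep a few buffers queued so that playback never starves
        while (_instance.PendingBufferCount < 3)
        {
            for (int i = 0; i < _buffer.Length; i++)
            {
                // Evaluate the user's formula for the current value of "t"
                _buffer[i] = GetSample();

                // Advance "t" by one sample
                _time++;
            }
            _instance.SubmitBuffer(_buffer);
        }
    }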

And just to make it complete, here are all of the member variables used. Unlike in my earlier synth articles, these algorithms work directly with bytes, so I only needed a single byte buffer to do the processing:
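They could look something like this (apart from _time, the names and the buffer size are illustrative):

    private const int BufferSize = 4000;

    private byte[] _buffer;
    private DynamicSoundEffectInstance _instance;
    private Thread _audioThread;
    private volatile bool _running = true;
    private int _time;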

That’s all for the audio generation part of the application. Now for the part that takes your one-line algorithm (as a string) and turns it into code that you can execute!

The C# Runtime Compiler

Dynamic languages usually have a method that lets you execute code in the form of a string (frequently named something like “eval”). C#, however, has no such mechanism. What it does have is a way to compile and assemble C# code at runtime. This means that you can write a complete class as a string, feed it through the runtime compiler, and then access it like any other class in your project (by making use of some Reflection).

So, imagine we had a class like this:
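Presumably something along these lines, with the user’s one-line formula spliced into the return statement (the expression shown is just an example):

    public class AudioGenerator
    {
        public static byte Generate(int t)
        {
            // The expression below is just an example; in the application
            // it is whatever the user typed into the text box
            return (byte)(t * ((t >> 12 | t >> 8) & 63 & t >> 4));
        }
    }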

Then all we’d need to do is call the static AudioGenerator.Generate(_time) method with our “_time” variable and store the resulting value in our audio buffer. Here’s how you can compile and assemble a class like this at runtime. Read through the comments for a better understanding of what I did:
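A sketch of how that step could be done with CSharpCodeProvider from System.CodeDom.Compiler (SetAlgorithm, _generatorType and GetErrorMessage are the names referenced in this article; returning the error message as a string is my own convention):

    // Requires: using System.CodeDom.Compiler; using Microsoft.CSharp;

    private Type _generatorType;

    public string SetAlgorithm(string algorithm)
    {
        // Wrap the user's expression in a complete class definition
        string source =
            "public class AudioGenerator" +
            "{" +
            "    public static byte Generate(int t)" +
            "    {" +
            "        return (byte)(" + algorithm + ");" +
            "    }" +
            "}";

        // Compile the source string into an in-memory assembly
        var provider = new CSharpCodeProvider();
        var parameters = new CompilerParameters
        {
            GenerateInMemory = true,
            GenerateExecutable = false
        };
        CompilerResults results = provider.CompileAssemblyFromSource(parameters, source);

        // Report any compiler errors back to the caller
        if (results.Errors.HasErrors)
        {
            return GetErrorMessage(results.Errors);
        }

        // Keep hold of the compiled type so it can be invoked via Reflection
        _generatorType = results.CompiledAssembly.GetType("AudioGenerator");
        return null;
    }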

By the way, GetErrorMessage() is just a helper method I made to join all the error messages into a single string for output. It is defined like so:
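A sketch of what it could look like (the exact formatting is up to you):

    private static string GetErrorMessage(CompilerErrorCollection errors)
    {
        // Join every compiler error into a single multi-line string
        var builder = new System.Text.StringBuilder();
        foreach (CompilerError error in errors)
        {
            builder.AppendLine("Line " + error.Line + ": " + error.ErrorText);
        }
        return builder.ToString();
    }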

By the end of it, if calling SetAlgorithm(algorithm) did not find any errors in your code, then the Type object “_generatorType” will be pointing at the type of the class you compiled at runtime. Using this Type object, you can simply use Reflection to invoke the Generate method on it:
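For example, the GetSample() method used by the audio loop could boil down to this (a sketch; caching the MethodInfo instead of looking it up on every call would be an easy optimization):

    private byte GetSample()
    {
        // No algorithm compiled yet, so output silence
        if (_generatorType == null)
        {
            return 0;
        }

        // Equivalent to calling AudioGenerator.Generate(_time) directly
        object result = _generatorType.GetMethod("Generate")
                                      .Invoke(null, new object[] { _time });
        return (byte)result;
    }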

And that’s all. I was worried that this might turn out to be too slow for real-time usage, but it seems to work okay, at least on my machine.

Results

As usual, here’s a video of the final result, along with the source code:

Source Code
3 Comments.
  1. Trance says:

    Cool stuff indeed!

    Another similar app is Sound Toy by Inigo Quilez; it’s implemented in JavaScript using the Web Audio API:
    http://www.iquilezles.org/apps/soundtoy/index.html

    And a video of him toying around with it:
    http://www.iquilezles.org/blog/?p=1511

    • Wow, thanks for the links, that’s awesome. :)

      I’ll take the chance to add something too. In this video (http://www.youtube.com/watch?v=tCRPUv8V22o) at 5:20 they actually managed to encode a melody and make something that really sounds like music. It’s done in JavaScript, though.

      Now I’m wondering if I could somehow coerce C# into interpreting something more complex like that. Probably with a converter to bridge the differences between the two languages.

  2. d2r2 says:

    You should not forget to convert to the XNA-16-bit-pipe … otherwise it sounds a bit weird (if not intended) :)
