Add Some Sweet Sound to Your App

Mike Bluestein

Audio helped pave the way for the explosion of mobile devices. First on the iPod, later the iPhone, and now on Android and Windows Phone as well, portable music, radio, and podcasts are among the most popular uses of these devices. Apps built with Xamarin, such as Rdio and gMusic, are among the top music apps available. Thanks to the power of the Core Audio framework, your apps can create great audio experiences too.


Rdio and gMusic 2, two music apps built with Xamarin

Core Audio is a C API created by Apple. However, Xamarin.iOS users can enjoy the friendly confines of C# to program against it, without the need to abruptly switch between Objective-C and C within the same code base. Rather than dealing with pointer soup and digging around in documentation to explore constants, you can work with object-oriented code, events and strong typing.

Core Audio is a low-level framework that underlies every audio feature in iOS. Higher-level frameworks such as the Media Player framework and AVFoundation build upon Core Audio. These frameworks are suitable for many scenarios, and you should use them if they meet your needs. However, for more advanced capabilities, such as parsing an audio stream and enqueuing it for playback yourself, the lower-level Core Audio framework is at your disposal.
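For comparison, if all you need is to play a local audio file, the higher-level AVFoundation API handles it in a couple of lines. Here's a minimal sketch; the file name is just illustrative:

using AVFoundation;
using Foundation;

// Simple playback of a bundled file via the higher-level API
var player = AVAudioPlayer.FromUrl (NSUrl.FromFilename ("sample.mp3"));
player.Play ();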

Streaming Audio

Streaming audio with Core Audio involves:

  1. Creating an AudioFileStream to parse the stream.
  2. Retrieving the audio stream, such as over the web using HTTP.
  3. Parsing and decoding the bytes from the audio stream.
  4. Creating an audio queue to play the audio.
  5. Enqueuing audio data in the audio queue’s buffer.
  6. Starting the audio queue.
  7. Freeing the buffer on completion.

Core Audio streaming is particularly useful for scenarios such as playing live audio from the web, as demonstrated in the following screencast and explained below.

[youtube http://www.youtube.com/watch?v=Q47oK2KpC84]

Parsing with AudioFileStream

Streaming with Core Audio involves first creating the AudioFileStream instance, with handlers wired up for its PacketDecoded and PropertyFound events:

audioFileStream = new AudioFileStream (AudioFileType.MP3);
audioFileStream.PacketDecoded += OnPacketDecoded;
audioFileStream.PropertyFound += OnPropertyFound;
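In these snippets, audioFileStream and outputQueue are kept as class-level fields holding the parser and the playback queue, roughly like this (both classes live in the AudioToolbox namespace):

AudioFileStream audioFileStream;
OutputAudioQueue outputQueue;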

To retrieve the stream, use whatever HTTP API you prefer; I used NSUrlSession in this case. Once the bytes are available, parse and decode the audio data by calling the AudioFileStream's ParseBytes method:

audioFileStream.ParseBytes (buffer, false);
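For instance, with NSUrlSession you can feed each chunk into the parser as it arrives from a data task. The following is a rough sketch using a data delegate; the class name and wiring are illustrative rather than the only way to do it:

using AudioToolbox;
using Foundation;

class StreamingDelegate : NSUrlSessionDataDelegate
{
  readonly AudioFileStream audioFileStream;

  public StreamingDelegate (AudioFileStream stream)
  {
    audioFileStream = stream;
  }

  public override void DidReceiveData (NSUrlSession session, NSUrlSessionDataTask dataTask, NSData data)
  {
    // Hand each chunk to the parser as it arrives; false means the
    // bytes are contiguous with the previous chunk
    audioFileStream.ParseBytes (data.ToArray (), false);
  }
}

You would create the session with NSUrlSession.FromConfiguration, passing in this delegate, and start a data task for the stream's URL.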

As properties are parsed, the PropertyFound handler will be called. The key property to watch for is AudioFileStreamProperty.ReadyToProducePackets, which indicates that the audio data will be in the forthcoming packets. Therefore, this is an appropriate place to create the OutputAudioQueue:

void OnPropertyFound (object sender, PropertyFoundEventArgs e)
{
  if (e.Property == AudioFileStreamProperty.ReadyToProducePackets) {
    outputQueue = new OutputAudioQueue (audioFileStream.StreamBasicDescription);
    outputQueue.OutputCompleted += OnOutputQueueOutputCompleted;
  }
}

Enqueuing Audio Data

The PacketDecoded handler will be called with the audio packets. This is where we can enqueue the audio data in the audio queue’s buffer. To do so we:

  1. Allocate the buffer.
  2. Copy the raw audio data into the buffer.
  3. Enqueue the buffer.

The following code, which lives inside the OnPacketDecoded handler, shows how to enqueue the audio data:

IntPtr outBuffer;
// Allocate an audio queue buffer large enough for this packet's data
outputQueue.AllocateBuffer (e.Bytes, out outBuffer);
// Copy the raw audio data into the buffer
AudioQueue.FillAudioData (outBuffer, 0, e.InputData, 0, e.Bytes);
// Enqueue the buffer, along with its packet descriptions, for playback
outputQueue.EnqueueBuffer (outBuffer, e.Bytes, e.PacketDescriptions);

Playing Audio

To start the audio queue we simply call the Start method of the OutputAudioQueue:

var status = outputQueue.Start ();

If all works well, streaming audio will begin playing. The Start method returns an AudioQueueStatus value, which will be AudioQueueStatus.Ok if the queue started successfully. There are a variety of other enum values that may be returned if something goes awry, ranging from an empty buffer to permission issues.
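For example, you might check the result like this; the handling shown here is just illustrative:

if (status != AudioQueueStatus.Ok)
  Console.WriteLine ("Audio queue failed to start: {0}", status);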

Since we’re dealing with unmanaged code under the covers, we need to clean up the buffer. The OutputCompleted event allows us to wire up a handler for just that. The queue’s FreeBuffer method will release the AudioQueue buffer, as shown below:

void OnOutputQueueOutputCompleted (object sender, OutputCompletedEventArgs e)
{
  outputQueue.FreeBuffer (e.IntPtrBuffer);
}
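When you're finished with the stream entirely, it's also worth stopping the queue and disposing of these unmanaged wrappers. A small sketch, assuming you do this once playback should end:

outputQueue.Stop (true); // true stops immediately; false finishes queued buffers first
outputQueue.Dispose ();
audioFileStream.Dispose ();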

Core Audio lets us drop down to low-level audio access, enabling incredible audio experiences. I'm excited to see the audio apps you come up with!

The code from this post is available in my GitHub repo.

Discuss this post in the Xamarin forums.
