Having a camera at our fingertips at all times has disrupted the camera industry. Just ten years ago, it was common to see dads toting a camera bag around Disney World to capture every moment of their child's first experience with their favorite characters; you'd probably be surprised to see one now. Mobile phones have brought photos and video to a device we can take almost anywhere. When was the last time you saw a camcorder? Even films presented at the Sundance Film Festival have been recorded on a mobile phone.
Cameras and Mobile
As if the built-in camera on phones wasn't already good enough, mobile apps have made it even better. Apps like Camera+ and VSCO Cam extend the functionality of the camera, and after you've taken the perfect selfie, you can add filters and share it with friends and family through apps like Facebook and Instagram. Apps like Snapchat are taking things a step further and are even changing the way we communicate.
Even if you aren't trying to disrupt the camera industry, your app may still need to take advantage of a camera for functionality such as logging business receipts or scanning a product barcode. In this post, we're going to see just how easy it is to add a camera control to your iOS apps with AVFoundation.
Adding Camera Control
Now that you know your app needs camera capabilities, what's the right approach for adding that functionality? There are two main options: you can take advantage of the camera controls and photo pickers already available on each platform, or you can build your own using AVFoundation. Neither approach is always the right one, but here are some guidelines to help you decide which is right for your app.
Your app should use the prebuilt camera and media control if:
- You prioritize code sharing over customization.
- Camera or media functionality is just a small part of your overall application.
Your app should use a custom camera control if:
- You prioritize customization over code sharing.
- You don’t want to navigate the user to a new view to take a photo.
- The core experience of the app revolves around the camera.
If you choose the prebuilt camera and media control, you can use the Media Plugin for Xamarin and Windows, which allows you to take photos on iOS, Android, and Windows from shared code. Under the hood, this plugin takes advantage of native controls such as UIImagePickerController on iOS or the IMAGE_CAPTURE Intent on Android. The remainder of this blog post, though, will detail exactly how to build your own camera control using AVFoundation.
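For comparison, here's a minimal sketch of the prebuilt approach using the Media Plugin's CrossMedia API. The directory and file names below are just placeholders, and you'd adapt the options to your own app:

```csharp
using System.Threading.Tasks;
using Plugin.Media;
using Plugin.Media.Abstractions;

public async Task TakePhotoWithMediaPluginAsync ()
{
    await CrossMedia.Current.Initialize ();

    // Bail out if the device has no camera (e.g., the iOS Simulator).
    if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
        return;

    // Launches the platform's native camera UI and returns the captured photo.
    var photo = await CrossMedia.Current.TakePhotoAsync (new StoreCameraMediaOptions {
        Directory = "Receipts",  // placeholder folder name
        Name = "receipt.jpg"     // placeholder file name
    });

    if (photo != null) {
        var stream = photo.GetStream ();
        // TODO: display or upload the captured photo.
    }
}
```

Note that this navigates the user away to the system camera UI, which is exactly the trade-off discussed above.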
Building a Custom Camera with AVFoundation
AVFoundation is a namespace that contains classes for high-level recording and playback of audio and video on iOS. Many tasks, such as capturing and playing photos or videos, can be accomplished without AVFoundation using the techniques described earlier in this post. However, if you would like to show a camera preview without navigating to another application, you will have to use AVFoundation.
Download the starter code, which shows a blank user interface with several buttons for switching the camera between front and back, toggling flash, and taking photos. Over the course of this post, we’ll transform this starter code into a fully-functional camera app. Note that the iOS Simulator does not support camera emulation, so all testing must be done on a physical device.
Requesting Permission to Use the Camera
Whether you're using the Media Plugin for Xamarin or rolling your own camera control, you must first gain permission from the user to use the camera within your application. We can use the AVCaptureDevice.GetAuthorizationStatus method to check whether we already have permission. If we don't, we can easily request it using AVCaptureDevice.RequestAccessForMediaTypeAsync, as seen below:
```csharp
async Task AuthorizeCameraUse ()
{
    var authorizationStatus = AVCaptureDevice.GetAuthorizationStatus (AVMediaType.Video);
    if (authorizationStatus != AVAuthorizationStatus.Authorized) {
        await AVCaptureDevice.RequestAccessForMediaTypeAsync (AVMediaType.Video);
    }
}
```
Be sure to call AuthorizeCameraUse in your ViewDidLoad lifecycle method to ensure you have the proper permissions before attempting to display a live stream from the camera.
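One additional requirement worth noting: on iOS 10 and later, your app must also declare why it uses the camera by adding an NSCameraUsageDescription entry to Info.plist, or iOS will terminate the app when it tries to access the camera. A minimal entry looks like this (the description string is just an example):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to capture photos.</string>
```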
Showing a Live Stream from the Camera
Now that we have access to the camera, let's define some fields that AVFoundation will utilize to display a live stream from the camera. It's important that these live at the class level, as we don't want them to be garbage collected:
```csharp
AVCaptureSession captureSession;
AVCaptureDeviceInput captureDeviceInput;
AVCaptureStillImageOutput stillImageOutput;
// The preview layer must also live at the class level, since we assign to it below.
AVCaptureVideoPreviewLayer videoPreviewLayer;
```
Next, define a new method named SetupLiveCameraStream, where we'll configure an AVCaptureSession and set up our user interface to display the stream from the camera.
```csharp
public void SetupLiveCameraStream ()
{
    captureSession = new AVCaptureSession ();

    // Render the camera's video stream into the liveCameraStream view's layer.
    videoPreviewLayer = new AVCaptureVideoPreviewLayer (captureSession) {
        Frame = this.View.Frame
    };
    liveCameraStream.Layer.AddSublayer (videoPreviewLayer);

    var captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);
    ConfigureCameraForDevice (captureDevice);
    captureDeviceInput = AVCaptureDeviceInput.FromDevice (captureDevice);
    captureSession.AddInput (captureDeviceInput);

    // Configure the still image output to produce JPEGs.
    var dictionary = new NSMutableDictionary ();
    dictionary [AVVideo.CodecKey] = new NSNumber ((int)AVVideoCodec.JPEG);
    stillImageOutput = new AVCaptureStillImageOutput () {
        OutputSettings = dictionary
    };

    captureSession.AddOutput (stillImageOutput);
    captureSession.StartRunning ();
}
```
The AVCaptureSession object helps to configure and display the live stream from the camera and passes the information to one or more output objects. We then utilize the AVCaptureVideoPreviewLayer class to render the video stream from the camera onto the UIView. Next, we create a capture device, which is just a class that we can use to set certain hardware properties of the camera, such as enabling autofocus. The AVCaptureDeviceInput class is used to capture the actual data from the camera. Finally, we wire up our input and output objects and start the capture session.
Next, create a new method named ConfigureCameraForDevice, which will be used to set certain hardware properties such as continuous autofocus and continuous auto exposure.
```csharp
void ConfigureCameraForDevice (AVCaptureDevice device)
{
    var error = new NSError ();
    if (device.IsFocusModeSupported (AVCaptureFocusMode.ContinuousAutoFocus)) {
        device.LockForConfiguration (out error);
        device.FocusMode = AVCaptureFocusMode.ContinuousAutoFocus;
        device.UnlockForConfiguration ();
    } else if (device.IsExposureModeSupported (AVCaptureExposureMode.ContinuousAutoExposure)) {
        device.LockForConfiguration (out error);
        device.ExposureMode = AVCaptureExposureMode.ContinuousAutoExposure;
        device.UnlockForConfiguration ();
    } else if (device.IsWhiteBalanceModeSupported (AVCaptureWhiteBalanceMode.ContinuousAutoWhiteBalance)) {
        device.LockForConfiguration (out error);
        device.WhiteBalanceMode = AVCaptureWhiteBalanceMode.ContinuousAutoWhiteBalance;
        device.UnlockForConfiguration ();
    }
}
```
Finally, add a call to SetupLiveCameraStream right after the AuthorizeCameraUse call in ViewDidLoad. Run the app, and you should have a functioning live camera stream.
Capturing the Photo
When the TakePhotoButtonTapped event is raised, we can capture the current frame shown from the camera by tapping into the AVCaptureStillImageOutput object we configured earlier in SetupLiveCameraStream. Once we have a connection to the camera, we can capture an image, convert the resulting buffer to NSData, and finally convert that to a byte array:
```csharp
async partial void TakePhotoButtonTapped (UIButton sender)
{
    var videoConnection = stillImageOutput.ConnectionFromMediaType (AVMediaType.Video);
    var sampleBuffer = await stillImageOutput.CaptureStillImageTaskAsync (videoConnection);
    var jpegImageAsNsData = AVCaptureStillImageOutput.JpegStillToNSData (sampleBuffer);
    var jpegAsByteArray = jpegImageAsNsData.ToArray ();

    // TODO: Send this to local storage or cloud storage such as Azure Storage.
}
```
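One simple way to fill in that TODO is to write the byte array to the app's sandboxed Documents folder. The helper below and its timestamp-based file naming are just illustrative:

```csharp
using System;
using System.IO;

void SavePhotoToDisk (byte[] jpegAsByteArray)
{
    // Write the captured JPEG into the app's Documents folder.
    var documents = Environment.GetFolderPath (Environment.SpecialFolder.MyDocuments);
    var filePath = Path.Combine (documents, $"photo-{DateTime.Now:yyyyMMdd-HHmmss}.jpg");
    File.WriteAllBytes (filePath, jpegAsByteArray);
}
```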
Enhancing the Camera Control
We now have a functioning camera control that streams video from the device’s camera and allows us to take photos. But what about all the fancy things cameras normally do, like supporting flash and both the front and back cameras on the device?
Supporting Flash
To support flash with our custom camera control, we can use the AVCaptureDeviceInput we wired up earlier to access the device and check whether it supports flash. If it does, we lock the device's camera for configuration, alter its flash setting, unlock the camera, and update the display image of the flash icon.
```csharp
partial void FlashButtonTapped (UIButton sender)
{
    var device = captureDeviceInput.Device;
    var error = new NSError ();

    if (device.HasFlash) {
        if (device.FlashMode == AVCaptureFlashMode.On) {
            device.LockForConfiguration (out error);
            device.FlashMode = AVCaptureFlashMode.Off;
            device.UnlockForConfiguration ();
            toggleFlashButton.SetBackgroundImage (UIImage.FromFile ("NoFlashButton.png"), UIControlState.Normal);
        } else {
            device.LockForConfiguration (out error);
            device.FlashMode = AVCaptureFlashMode.On;
            device.UnlockForConfiguration ();
            toggleFlashButton.SetBackgroundImage (UIImage.FromFile ("FlashButton.png"), UIControlState.Normal);
        }
    }
}
```
Supporting Front and Back Cameras
Similarly, to switch between the front (selfie) and back cameras, all we need to do is swap the input source for the AVCaptureSession, based on a position from the AVCaptureDevicePosition enumeration.
```csharp
partial void SwitchCameraButtonTapped (UIButton sender)
{
    // Flip the desired position to the opposite of the current input.
    var devicePosition = captureDeviceInput.Device.Position;
    if (devicePosition == AVCaptureDevicePosition.Front) {
        devicePosition = AVCaptureDevicePosition.Back;
    } else {
        devicePosition = AVCaptureDevicePosition.Front;
    }

    var device = GetCameraForOrientation (devicePosition);
    ConfigureCameraForDevice (device);

    // Swap the session's input inside a configuration block.
    captureSession.BeginConfiguration ();
    captureSession.RemoveInput (captureDeviceInput);
    captureDeviceInput = AVCaptureDeviceInput.FromDevice (device);
    captureSession.AddInput (captureDeviceInput);
    captureSession.CommitConfiguration ();
}

public AVCaptureDevice GetCameraForOrientation (AVCaptureDevicePosition orientation)
{
    var devices = AVCaptureDevice.DevicesWithMediaType (AVMediaType.Video);
    foreach (var device in devices) {
        if (device.Position == orientation) {
            return device;
        }
    }
    return null;
}
```
With that, we now have a complete camera control that streams live video from the camera, takes photos, supports flash, and switches between the front and back cameras.
Wrapping Up
There are many reasons you may want to incorporate a camera control into your app, and it's important to select the approach that's right for your app: use the Media Plugin for Xamarin if you don't need fine-tuned control over the camera, or a custom camera control built with AVFoundation for maximum control over the user experience. In this blog post, we walked through how to create your own custom camera control that supports taking photos, flash, and both the front and back cameras on the device. To learn more, download the full source for the custom camera control, or check out our documentation on AVFoundation or the Media Plugin for Xamarin.