Image Management in ASP.Net Using Azure Blob Storage

Michael Lanzetta

Recently, we were working with a large appliance manufacturer training Convolutional Neural Networks (CNNs) for image recognition, which involved solving a wide variety of problems. Herein, we focus on the end result – when you have a deployed Neural Network ready to score images taken by your client, how do you get the images to the Neural Network?

Image-recognizing Neural Networks tend to be picky about the images they receive, with a fixed size constraint and a constrained number of color channels, all depending on their topology. Our network took 224x224x3 inputs (~150k features), so incoming images needed to be scaled down to that size before being handed to the NN. Our goal was to store both the initial and down-scaled images in blob storage, which would allow us to quickly test new trained nets and new prediction methodologies (tiling the images, various pre-filters, etc.) on the full corpus of previously uploaded images.

We initially uploaded directly into blob storage and then tickled a RESTful endpoint to have it do the resize and prediction, but this caused problems whenever the client or server went down. Orphaned images, multiple resize requests, and extra network traffic as we pulled images from blob storage just to resize them and push them back – there had to be another way! (Cue infomercial music). Could we upload the images to our ASP.Net Web API, resize them there, and then do our predictions from that? Could we do so without cluttering the local file system with in-progress or orphaned files?

Uploading in ASP.Net, Direct to Blob Storage

Uploading files to a web service is typically done in one of two ways. One option is to POST to a RESTful endpoint a blob of binary data in either an agreed-upon format or a well-known format that's discoverable via data interrogation. The more typical method is the multipart form-data MIME POST that has been part of web development for ages. Most client libraries speak this easily, and it allows you to embed multiple files as well as other metadata (filenames in the MIME headers, other data in the form fields). We chose this second solution, as it made it easy for us to test via a simple web form and to integrate into our Xamarin client for use in our multi-platform mobile app.
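For context, here is a minimal client-side sketch of that second approach. The endpoint URL, field names, and class name are illustrative, not from the repo:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public static class ImageUploader
{
    // POST an image file as multipart/form-data to a hypothetical endpoint.
    public static async Task UploadAsync(string filePath, string endpoint)
    {
        using (var client = new HttpClient())
        using (var content = new MultipartFormDataContent())
        using (var fileStream = File.OpenRead(filePath))
        {
            // The filename travels in the MIME part's Content-Disposition header.
            content.Add(new StreamContent(fileStream), "image", Path.GetFileName(filePath));
            // Arbitrary metadata can ride along as additional form fields.
            content.Add(new StringContent("mobile-app"), "source");

            var response = await client.PostAsync(endpoint, content);
            response.EnsureSuccessStatusCode();
        }
    }
}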

With ASP.Net, you typically use the MultipartFormDataStreamProvider, which streams your MIME files to disk. We built our own custom StreamProvider that allowed us to upload directly into Azure Blob Storage and integrate whatever pre/post-filters we needed in the process. Since we used MemoryStreams to hold the in-process image data, we paid a larger memory footprint per request on the server, but required no local disk access and left no orphaned data. It's a trade-off (long-term stability vs. short-term scalability), but for the purposes of our Proof-of-Concept it was exactly what we needed. I've written up a more detailed post on this whole solution, and released the code on GitHub under the permissive MIT license.
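One possible shape for such a provider is sketched below. The class and property names are mine, not the repo's; the key idea is overriding GetStream so each MIME part lands in a MemoryStream rather than a file on disk:

using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;

public class InMemoryMultipartStreamProvider : MultipartStreamProvider
{
    // Buffered parts, in the order they arrived.
    public List<MemoryStream> Contents { get; } = new List<MemoryStream>();

    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        // Hand back a MemoryStream instead of a FileStream, so nothing
        // ever touches the local file system.
        var stream = new MemoryStream();
        Contents.Add(stream);
        return stream;
    }
}

A controller action would then call await Request.Content.ReadAsMultipartAsync(provider), run any resize/pre-filter steps on the buffered streams, and push the results to blob storage (e.g. with the storage client's CloudBlockBlob.UploadFromStreamAsync).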

Resizing a Stream

using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

public static Stream ResizeImage(Stream imageStream, int width, int height)
{
    using (var image = Image.FromStream(imageStream))
    {
        var output = ResizeImage(image, width, height);
        var outputStream = new MemoryStream();
        output.Save(outputStream, image.RawFormat);
        outputStream.Position = 0;
        return outputStream;
    }
}
/// <summary>
/// Taken from
/// </summary>
public static Bitmap ResizeImage(Image image, int width, int height)
{
    var destRect = new Rectangle(0, 0, width, height);
    var destImage = new Bitmap(width, height);

    destImage.SetResolution(image.HorizontalResolution, image.VerticalResolution);

    using (var graphics = Graphics.FromImage(destImage))
    {
        graphics.CompositingMode = CompositingMode.SourceCopy;
        graphics.CompositingQuality = CompositingQuality.HighQuality;
        graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        graphics.SmoothingMode = SmoothingMode.HighQuality;
        graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;

        using (var wrapMode = new ImageAttributes())
        {
            // Tile-flip the edges so the bicubic filter doesn't produce
            // ghosting artifacts along the image borders.
            wrapMode.SetWrapMode(WrapMode.TileFlipXY);
            graphics.DrawImage(image, destRect, 0, 0, image.Width, image.Height, GraphicsUnit.Pixel, wrapMode);
        }
    }

    return destImage;
}

We can resize a simple JPG downward, and even alter the aspect ratio, without damaging the quality too badly, with a simple call to ResizeImage(bananas, 100, 100). [Images omitted: the original photo and its resized result.] This same technique (Stream => Image => Bitmap => Stream) can be used for any number of image pre-processing filters – resize just happened to be the one I needed.
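To illustrate, here is the same pattern applied to a different pre-filter – a grayscale conversion via a ColorMatrix. This example is mine, not from the repo:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

public static Stream GrayscaleImage(Stream imageStream)
{
    using (var image = Image.FromStream(imageStream))
    using (var output = new Bitmap(image.Width, image.Height))
    {
        using (var graphics = Graphics.FromImage(output))
        using (var attributes = new ImageAttributes())
        {
            // Standard luminance weights mapping RGB to gray.
            var matrix = new ColorMatrix(new[]
            {
                new float[] { 0.299f, 0.299f, 0.299f, 0, 0 },
                new float[] { 0.587f, 0.587f, 0.587f, 0, 0 },
                new float[] { 0.114f, 0.114f, 0.114f, 0, 0 },
                new float[] { 0, 0, 0, 1, 0 },
                new float[] { 0, 0, 0, 0, 1 }
            });
            attributes.SetColorMatrix(matrix);
            graphics.DrawImage(image, new Rectangle(0, 0, image.Width, image.Height),
                0, 0, image.Width, image.Height, GraphicsUnit.Pixel, attributes);
        }

        var outputStream = new MemoryStream();
        output.Save(outputStream, image.RawFormat);
        outputStream.Position = 0;
        return outputStream;
    }
}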


Processing images as part of an ASP.Net workflow is not appropriate for all situations – high-throughput, low-latency websites should beware – but in our case it was exactly what we needed. Keeping the processing fully in memory without offloading to disk kept our servers clutter-free and stable over the long term, while providing an easy answer for failed requests (side-effect-free idempotency). Abandoned requests could still result in orphaned images in blob storage (e.g. an uploaded full-scale image fails during downscaling), but since our goal was to gather as many uploaded images as possible for subsequent training sessions, this approach delivered exactly what we wanted.

The code in this code story, along with the linked repo and post, can provide the same solution for your site. The ResizeImage code and the various Azure utilities in the repo are usable whether or not you're working with ASP.Net.

