September 12th, 2014

Announcing the 0.6.0-beta preview of Microsoft Azure WebJobs SDK

We are releasing another preview of the Microsoft Azure WebJobs SDK, which was first introduced by Scott Hanselman. For more about the previous preview, see this announcement post.

This release has the same general feature set as 0.5.0-beta, plus a few exciting new features.

Download this release

You can download the WebJobs SDK from the NuGet gallery. You can install or update these packages using the NuGet Package Manager Console, like this:

Install-Package Microsoft.Azure.WebJobs -Pre

If you want to use Microsoft Azure Service Bus triggers, install the following package:

Install-Package Microsoft.Azure.WebJobs.ServiceBus -Pre

What is the WebJobs SDK?

The WebJobs feature of Microsoft Azure Web Sites provides an easy way for you to run programs such as services or background tasks in a web site. You can upload an executable file such as an .exe, .cmd, or .bat file to your web site and run it as a triggered or continuous WebJob. Without the WebJobs SDK, connecting to storage and running background tasks requires a lot of complex programming. The SDK provides a framework that lets you write a minimum amount of code to get common tasks done.

The WebJobs SDK has a binding and trigger system that works with Microsoft Azure Storage blobs, queues, and tables, as well as Service Bus. The binding system makes it easy to write code that reads or writes Microsoft Azure Storage objects. The trigger system calls a function in your code whenever new data is received in a queue or blob.
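For example, here is a minimal sketch of how a trigger and a binding combine. The queue name, blob path, and the {queueTrigger} token below are illustrative assumptions, not taken from this release's documentation: when a message arrives on the "orders" queue, the SDK calls the function and passes it the message text and a writer for an output blob.

Code Snippet

using System.IO;
using Microsoft.Azure.WebJobs;

public static class Functions
{
    // Runs whenever a new message appears on the "orders" queue;
    // the SDK binds the message text and an output blob automatically.
    public static void ProcessOrder(
        [QueueTrigger("orders")] string orderId,
        [Blob("receipts/{queueTrigger}")] TextWriter receipt)
    {
        receipt.Write("Processed order " + orderId);
    }
}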

Updates in this preview

Table Ingress.

One of the features of the SDK is the ability to bind to Azure Storage Tables. Starting with this release, the SDK lets you ingress data into Azure Tables. Ingress is a common scenario when you are parsing files stored in blobs (for example, with a CSV reader) and storing the values in tables. In these cases the ingress function may write a very large number of rows (millions in some cases).

The WebJobs SDK makes it easy to implement this functionality, and it adds real-time monitoring capabilities, such as the number of rows written to the table, so you can track the progress of the ingress function.

The following function shows how you can write 100,000 rows into Azure Table storage.

Code Snippet
using Microsoft.Azure.WebJobs;

public static class Program
{
    static void Main()
    {
        JobHost host = new JobHost();
        host.Call(typeof(Program).GetMethod("Ingress"));
    }

    [NoAutomaticTrigger]
    public static void Ingress([Table("Ingress")] ICollector<Person> tableBinding)
    {
        // Loop to simulate ingressing lots of rows. Replace this with your
        // own logic for reading from blob storage and writing to Azure Tables.
        for (int i = 0; i < 100000; i++)
        {
            tableBinding.Add(new Person() { PartitionKey = "Foo", RowKey = i.ToString(), Name = "Name" });
        }
    }
}

public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
}

When you run this function and view it in the dashboard, you will see a snapshot like the following. The dashboard shows in real time how many rows have been written to the table called “Ingress”. Because this is a long-running function, the dashboard also shows an “Abort Host” button that lets you cancel it.

[Image: IngressInProcess]

When the Ingress function completes successfully, the dashboard displays a success message, as shown below.

[Image: IngressComplete]

In the above example, the Ingress function was invoked through host.Call(). You can use this pattern to run the Ingress program on a schedule. You can also call the Ingress function when a new blob is uploaded to a container. For example, if you have a background processing program that parses files stored in blob storage and writes the data into tables, you can do something like the following:

Code Snippet
using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.WebJobs;

public static class Program
{
    static void Main()
    {
        JobHost host = new JobHost();
        host.RunAndBlock();
    }

    public static void CSVParsing(
        [BlobTrigger(@"table-uploads/{name}")] TextReader input,
        [Table("Ingress")] ICollector<Person> tableBinding)
    {
        // This is pseudocode showing how you can parse your CSV files and
        // store the rows in Azure Tables. ParseUsingMyCSVParser is a
        // placeholder for your own parsing logic.
        IEnumerable<Person> rows = ParseUsingMyCSVParser<Person>(input);
        foreach (var row in rows)
        {
            tableBinding.Add(row);
        }
    }
}

public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
}

As a developer, you can now implement ingress scenarios fairly easily and also get real-time monitoring on the dashboard, without having to write any diagnostics code yourself.

Apart from ingress scenarios, you can also do the following with Azure Tables (a rough sketch follows the list):

  • Read a single entity.
  • Enumerate a partition.
  • Bind to IQueryable/IEnumerable to get a list of entities.
  • Create, Update and Delete entities.
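As an illustration, here is a minimal sketch of a read binding that enumerates a partition, assuming a queue message carries the partition key to look up. The signatures follow the later documented API and the entity-type requirements may differ in this preview, so treat the linked sample below as authoritative. Person is the entity class from the earlier snippets.

Code Snippet

using System;
using System.Linq;
using Microsoft.Azure.WebJobs;

public static class TableReadExample
{
    // Enumerate a partition by binding the table as IQueryable<Person>.
    // The "lookups" queue name and the filter below are illustrative only.
    public static void EnumeratePartition(
        [QueueTrigger("lookups")] string partitionKey,
        [Table("Ingress")] IQueryable<Person> table)
    {
        foreach (Person person in table.Where(p => p.PartitionKey == partitionKey))
        {
            Console.WriteLine(person.Name);
        }
    }
}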

Please see the following sample for more information: https://github.com/Azure/azure-webjobs-sdk-samples/tree/master/BasicSamples/TableOperations

Sending multiple messages on a queue.

Starting with this version, you can use ICollector.Add() to send multiple messages to a queue.

Note: The previous version of the SDK used ICollection, which has been removed. Please use ICollector going forward.

Another behavior change: the SDK now writes a message to the queue as soon as you call Add(). In the previous version, the SDK waited until the function completed before writing all of the messages to the queue.

The following code shows how you can send multiple messages to a queue.

Code Snippet
using Microsoft.Azure.WebJobs;

public static class Program
{
    static void Main()
    {
        JobHost host = new JobHost();
        host.RunAndBlock();
    }

    public static void WriteMultipleQueueMessages(
        [QueueTrigger("queue")] string message,
        [Queue("outputqueue")] ICollector<string> output)
    {
        // Process the incoming queue message and write
        // multiple messages to the output queue.
        output.Add("message1");
        output.Add("message2");
    }
}


Samples

Samples for the WebJobs SDK can be found at https://github.com/Azure/azure-webjobs-sdk-samples

    • You can find samples on how to use triggers and bindings for blobs, tables, queues, and Service Bus.
    • There is a sample called PhluffyShuffy, an image-processing web site where a customer can upload pictures, which triggers a function that processes those pictures from blob storage.

Documentation

Deploying WebJobs with SDK to Azure Websites

Visual Studio 2013 Update 3 with Azure SDK 2.4 added Visual Studio tooling support for publishing WebJobs to Azure Websites. For more information, see How to Deploy Azure WebJobs to Azure Websites.

Known Issues when migrating from 0.5.0-beta to 0.6.0-beta

ICollector instead of ICollection

The previous version of the SDK used ICollection, which has been removed. Please use ICollector starting with this release.

Another behavior change: the SDK now writes a message to the queue as soon as you call Add(). In the previous version, the SDK waited until the function completed before writing all of the messages to the queue.
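As a sketch of the migration, the before/after below shows the change in the binding type; the 0.5.0-beta signature is reconstructed from this note, not from that release's documentation.

Code Snippet

using Microsoft.Azure.WebJobs;

public static class MigrationExample
{
    // 0.5.0-beta (no longer compiles): the output was bound as ICollection.
    // public static void Produce([Queue("outputqueue")] ICollection<string> output) { ... }

    // 0.6.0-beta: bind to ICollector instead. Each Add() now writes the
    // message to the queue immediately instead of at function completion.
    public static void Produce([Queue("outputqueue")] ICollector<string> output)
    {
        output.Add("message1");
    }
}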

Give feedback and get help

The WebJobs feature of Microsoft Azure Web Sites and the Microsoft Azure WebJobs SDK are in preview. Any feedback to improve this experience is always welcome.

If you have questions that are not directly related to the tutorial, you can post them to the Azure forum, the ASP.NET forum, or StackOverflow.com. Use the #AzureWebJobsSDK hashtag on Twitter and the Azure-WebJobsSDK tag on StackOverflow.
