Create a Java Azure Cosmos DB Function Trigger using Visual Studio Code in 2 minutes!

Theo van Kraay

Creating event sourcing solutions with Azure Cosmos DB is easy with Azure Functions triggers, where you can leverage the Change Feed Processor’s powerful scaling and reliable event detection functionality, without the need to maintain any worker infrastructure. You can just focus on your Azure Function’s logic without worrying about the rest of the event-sourcing pipeline. In this blog, we have some quick how-to videos to get you up and running with Java Azure Cosmos DB Function triggers!

If you want to follow along, there are some prerequisites you should have in place:

  • Ideally, a Windows 10 machine.
  • Access to an Azure subscription, and an Azure Cosmos DB account – instructions here.
  • Visual Studio Code installed – instructions here.
  • Azure Storage Emulator installed (make sure this is running before trying to follow the demo) – instructions here.
  • The prerequisites for Azure Functions in Java with VS Code.
  • Azure Functions Core Tools installed – instructions here (also given in the above link as an option).

Create an Azure Cosmos DB Functions Trigger in Java with VS Code… in 2 minutes!

(code shown below)

The code used in the video is below:

package com.function;

import com.microsoft.azure.functions.annotation.*;
import com.microsoft.azure.functions.*;

/**
 * Azure Functions in Java with Cosmos DB Trigger.
 */
public class Function {

    @FunctionName("cosmosDBMonitor")
    public void cosmosDbProcessor(
            @CosmosDBTrigger(name = "items",
            databaseName = "database", collectionName = "collection1",
            createLeaseCollectionIfNotExists = true,
            connectionStringSetting = "AzureCosmosDBConnection") String[] items,
            final ExecutionContext context) {
        for (String item : items) {
            context.getLogger().info(item);
        }
        context.getLogger().info(items.length + " item(s) changed.");
    }
}
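
Each element of the items array delivered to the trigger is the raw JSON of a changed document, including Cosmos DB’s system properties. Purely as an illustration (the id and name values here are hypothetical), a document arriving from the change feed might look something like this:

{
  "id": "a1b2c3d4",
  "name": "exampleItem",
  "_rid": "...",
  "_self": "...",
  "_etag": "...",
  "_attachments": "attachments/",
  "_ts": 1578940913
}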

The local.settings.json file should look something like the below:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureCosmosDBConnection": "<PRIMARY CONNECTION STRING for Cosmos DB from Azure Portal>",
    "FUNCTIONS_WORKER_RUNTIME": "java"
  }
}

Next, you can turn your function into a simple event sourcing solution, which streams inserts and updates from one collection into another, by replacing the code created in the first demo with the following:

package com.function;

import com.microsoft.azure.functions.annotation.*;
import com.microsoft.azure.cosmosdb.ConnectionMode;
import com.microsoft.azure.cosmosdb.ConnectionPolicy;
import com.microsoft.azure.cosmosdb.ConsistencyLevel;
import com.microsoft.azure.cosmosdb.Document;
import com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient;
import com.microsoft.azure.functions.*;

/**
 * Azure Functions in Java with Cosmos DB Trigger.
 */
public class Function {

    private final String databaseName = "database";
    private final String collectionId = "collection2";
    private AsyncDocumentClient asyncClient;
    private final String targeturi = System.getenv("targeturi");
    private final String targeturikey = System.getenv("targeturikey");

    private final ConnectionPolicy connectionPolicy = new ConnectionPolicy();

    public Function() {
        // Set the connection mode before the client is built so that it actually takes effect.
        connectionPolicy.setConnectionMode(ConnectionMode.Direct);
        asyncClient = new AsyncDocumentClient.Builder().withServiceEndpoint(targeturi)
                .withMasterKeyOrResourceToken(targeturikey).withConnectionPolicy(connectionPolicy)
                .withConsistencyLevel(ConsistencyLevel.Session).build();
    }

    @FunctionName("cosmosDBMonitor")
    public void cosmosDbProcessor(
            @CosmosDBTrigger(name = "items", databaseName = "database", collectionName = "collection1", createLeaseCollectionIfNotExists = true, connectionStringSetting = "AzureCosmosDBConnection") String[] items,
            final ExecutionContext context) {
        for (String item : items) {
            Document doc = new Document(item);
            // Block until the write completes so the document is persisted before the function returns.
            asyncClient.createDocument("dbs/" + databaseName + "/colls/" + collectionId, doc, null, false)
                    .toBlocking().single().getResource();
            context.getLogger().info("moved document: " + item);
        }
    }
}
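
The loop above blocks on each write in turn, which keeps the example easy to follow. Since AsyncDocumentClient returns RxJava Observables, one possible variation (a sketch only, reusing the same fields and client as above, and additionally importing java.util.ArrayList, java.util.List, com.microsoft.azure.cosmosdb.ResourceResponse, and rx.Observable) is to start all the writes concurrently and block once for the whole batch:

    @FunctionName("cosmosDBMonitor")
    public void cosmosDbProcessor(
            @CosmosDBTrigger(name = "items", databaseName = "database", collectionName = "collection1", createLeaseCollectionIfNotExists = true, connectionStringSetting = "AzureCosmosDBConnection") String[] items,
            final ExecutionContext context) {
        // Start every write without blocking between them.
        List<Observable<ResourceResponse<Document>>> writes = new ArrayList<>();
        for (String item : items) {
            writes.add(asyncClient.createDocument(
                    "dbs/" + databaseName + "/colls/" + collectionId, new Document(item), null, false));
        }
        // merge() subscribes to all of the Observables at once; the function still has to
        // block before returning, but now it waits once for the batch rather than per document.
        Observable.merge(writes).toList().toBlocking().single();
        context.getLogger().info("moved " + items.length + " document(s)");
    }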

Your local.settings.json would look something like this (note the targeturi and targeturikey settings added for the target collection):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureCosmosDBConnection": "<PRIMARY CONNECTION STRING for Cosmos DB in Azure portal>",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "targeturi": "<URI for target database/collection from Portal>",
    "targeturikey": "<PRIMARY KEY for target database/collection from Portal>"
  }
}
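
For reference, a filled-in version might look something like the following (the account names are placeholders, not real accounts):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureCosmosDBConnection": "AccountEndpoint=https://my-source-account.documents.azure.com:443/;AccountKey=<primary key>;",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "targeturi": "https://my-target-account.documents.azure.com:443/",
    "targeturikey": "<primary key for the target account>"
  }
}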

Watch the video below to see how to deploy to Azure! To follow along with this video, you should have the prerequisites from the above video already installed, plus the following:

  • Azure CLI – instructions here.
  • The Cosmos DB Java SDK v2.6.5, added via the below dependency in your pom.xml:
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-cosmosdb</artifactId>
  <version>2.6.5</version>
</dependency>
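
Note that the @FunctionName and @CosmosDBTrigger annotations used in the code above come from the azure-functions-java-library package, which projects generated with the Azure Functions Maven archetype already include. If your pom.xml is missing it, the dependency looks something like this (the version shown is indicative; use the latest available):

<dependency>
  <groupId>com.microsoft.azure.functions</groupId>
  <artifactId>azure-functions-java-library</artifactId>
  <version>1.3.1</version>
</dependency>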

For more information about Azure Cosmos DB’s change feed and its use cases, go here!

For the official documentation on creating an Azure Function triggered by Cosmos DB, go here!

Get started

Create a new account using the Azure Portal, ARM template or Azure CLI and connect to it using your favourite tools. Stay up-to-date on the latest Azure #CosmosDB news and features by following us on Twitter @AzureCosmosDB. We are really excited to see what you will build with Azure Cosmos DB!

About Azure Cosmos DB

Azure Cosmos DB is a globally distributed, multi-model NoSQL database service that enables you to read and write data from any Azure region. It offers turnkey global distribution and guarantees single-digit millisecond latency at the 99th percentile and 99.999 percent high availability, with elastic scaling of throughput and storage.

1 comment


  • Thomas Vandenbon

    This article mentions “Event Sourcing” a lot, but it doesn’t do anything related to Event Sourcing.
    What you’re doing here is copying data from one container to another when it’s being added.
    In an article about event sourcing, I would expect the following topics to be handled:
    – How to handle exceptions in a stream?
    – How to handle upcasting of events?
    – How to maintain projections or materialized views?

    In my experience, the hardest problem with using Azure Functions for event sourcing is the lack of control you have when things go wrong.
    Since things will always go wrong, it makes Azure Functions a poor fit for any production ready system.
