Change Feed – Unsung Hero of Azure Cosmos DB

Leonard

Azure Cosmos DB is rapidly growing in popularity, and for good reason. This globally distributed, multi-model database service has massively scalable storage and throughput, provides a sliding scale for consistency, is fully and automatically indexed, exposes multiple APIs for multiple data models, and has SDKs for the most popular languages. Throw in a server-side programming model for ACID transactions with stored procedures, triggers, and user-defined functions, along with 99.999% SLAs on availability, throughput, latency, and consistency, and it’s easy to see why Azure Cosmos DB is fast winning the hearts of developers and solution architects alike.

Yet still today, one of the most overlooked capabilities in Azure Cosmos DB is its change feed. This little gem sits quietly behind every container in your database, watches for changes, and maintains a persistent record of them in the order they occur. This provides you with a reliable mechanism to consume a continuous and incremental feed of changes, as documents are actively written or modified in any container.

There are numerous use cases for this, and I’ll call out a few of the most common ones in a moment. But all of them share a need to respond to changes made to an Azure Cosmos DB container. And the first thought that comes to the mind of a relational database developer is to use a trigger for this. Azure Cosmos DB supports triggers as part of its server-side programming model, so it could be natural to think of using this feature to consume changes in real time when you need to.

Unfortunately, though, triggers in Azure Cosmos DB do not fire automatically as they do in the relational world. They need to be explicitly referenced with each change in order to run, so they cannot be relied upon for capturing all changes made to a container. Furthermore, triggers are JavaScript-only, and they run in a bounded execution environment within Azure Cosmos DB that is scoped to a single partition key. These characteristics further limit what triggers can practically accomplish in response to a change.

But with change feed, you’ve got a reliable mechanism for retrieving changes made to any container, all the way back to the beginning of time. You can write code (in your preferred language) that consumes the change feed to process it as needed and deploy that code to run on Azure. This paves an easy path for you to build many different solutions for many different scenarios.

Scenarios for change feed

Some of the more common use cases for change feed include:

  • Replicating containers for multiple partition keys
  • Denormalizing a document data model across containers
  • Triggering API calls for an event-driven architecture
  • Real-time stream processing and materialized view patterns
  • Moving or archiving data to secondary data stores

Each of these deserves its own focused blog post. For the purposes of this overview, however, I’ll discuss each of them at a high level.

Replicating containers for multiple partition keys

One of the most (perhaps the most) important things you need to do when creating a container is to decide on an appropriate partition key – a single property in your data that the container will be partitioned by.

Now sometimes this is easy, and sometimes it is not. To settle on the right choice, you need a clear understanding of how your data is used (written to and queried), and how horizontal partitioning works. But what do you do when you can’t decide? What if there are two properties that make good choices, one for write performance and another that’s better for query performance? This can lead to “analysis paralysis,” a scary condition that is fortunately easy to remedy using change feed.

All you do is create two containers, each partitioned by a different partition key. The first container uses a partition key that’s optimized for writes (it may be appropriate for certain types of queries too), while the second uses a partition key optimized for the most typical queries. Then simply use change feed to monitor changes made to the first container as the writes occur and replicate each change out to the second container.

Your application then writes to the first container and queries from the second container, simple as that! Just bear in mind that this approach is recommended only for read-heavy scenarios, where the one-time cost of replicating each change is exceeded by the savings you get from reducing a great many cross-partition queries into single-partition queries.
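
To make the idea concrete, here is a minimal sketch of that replication step in Python. Plain dictionaries stand in for the two Cosmos DB containers, and a list of changed documents stands in for the change feed; a real solution would read changes via the SDK instead. All names here are illustrative, not from the post.

```python
# In-memory stand-ins: container 1 is partitioned by deviceId (write-optimized),
# container 2 by customerId (query-optimized). Cosmos DB would route each
# document to a physical partition based on the container's partition key.

def replicate(changes, target_container):
    """Upsert each changed document into the target container."""
    for doc in changes:
        target_container[doc["id"]] = doc  # upsert by id

# Documents that arrived on the (simulated) change feed of container 1.
source_changes = [
    {"id": "1", "deviceId": "d1", "customerId": "c42", "reading": 98.6},
    {"id": "2", "deviceId": "d2", "customerId": "c42", "reading": 99.1},
]

query_container = {}
replicate(source_changes, query_container)
print(len(query_container))  # → 2
```

Queries by customerId now hit the second container as single-partition queries, while writes keyed on deviceId continue to land in the first.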

Denormalizing a document data model across containers

Developers with a background in relational database design often struggle initially with the denormalized approach to data modeling in the NoSQL world of JSON documents. I personally empathize; from my own experience, I know that it can be difficult at first to embrace concepts that run contrary to deeply ingrained practices that span decades of experience in the field.

Data duplication is a case in point: it’s considered a big no-no in the normalized world of relational databases with operational workloads. But with NoSQL, we often deliberately duplicate data in order to avoid expensive additional lookups. There is no cross-document JOIN in the NoSQL world, and we can avoid having to perform our own “manual” joins if we simply duplicate the same information across documents in different containers.

This is a somewhat finer-grained version of the previous scenario, which replicates entire documents between two containers. In this case, we have different documents in each container, with data fragments from changed documents in one container being replicated into other (related) documents in another container.

But how do you ensure that the duplicated data remains in sync as changes occur in the source container? Why, change feed of course! Just monitor the source container and update the target container. I’ll show you exactly how to do this in a future post.
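
A rough sketch of that sync step, assuming a hypothetical model where each order document duplicates its customer’s name: when a customer document changes in the source container, the change feed consumer copies the changed fragment into every related order document. The field names are assumptions for illustration.

```python
# Propagate a changed fragment (the customer's name) from a changed customer
# document into the related order documents that duplicate it.

def sync_customer_name(changed_customer, orders_container):
    for order in orders_container:
        if order["customerId"] == changed_customer["id"]:
            order["customerName"] = changed_customer["name"]  # duplicated data

orders = [
    {"id": "o1", "customerId": "c1", "customerName": "Old Name"},
    {"id": "o2", "customerId": "c2", "customerName": "Other"},
]
sync_customer_name({"id": "c1", "name": "New Name"}, orders)
print(orders[0]["customerName"])  # → New Name
```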

Triggering API calls for an event-driven architecture

In this scenario, you source events to a set of microservices, each with a single responsibility. Consider, for instance, an e-commerce website with a large-scale order processing pipeline. The pipeline is broken up into a set of smaller microservices, each of which can be scaled out independently. Each microservice is responsible for a single task in the pipeline, such as calculating tax on each order, generating tax audit records, processing each order payment, sending orders off to a fulfillment center, and generating shipping notifications.

Thus, you potentially have N microservices communicating with up to N-1 other microservices, which adds significant complexity to the larger architecture. The design can be greatly simplified if, instead, all these microservices communicate through a single bus; that is, a persistent event store. And Azure Cosmos DB serves as an excellent persistent event store, because the change feed makes it easy to broker and source these events to each microservice. Furthermore, because the events themselves are persisted, the order processing pipeline is remarkably resilient to failures. You can also query and navigate the individual events, so that this data can be surfaced through a customer care API.
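
Here’s a tiny sketch of the event-sourcing idea, with each single-responsibility microservice modeled as a handler that reacts only to the event types it owns. The event types and handlers are hypothetical, not from the post.

```python
# Events persisted (in order) to the event store; the change feed would
# deliver these to each consuming microservice.
events = [
    {"type": "OrderPlaced", "orderId": "o1"},
    {"type": "PaymentReceived", "orderId": "o1"},
]

# Single-responsibility consumers, keyed by the event type they handle.
handlers = {
    "OrderPlaced": lambda e: f"tax calculated for {e['orderId']}",
    "PaymentReceived": lambda e: f"payment processed for {e['orderId']}",
}

processed = [handlers[e["type"]](e) for e in events]
print(processed[0])  # → tax calculated for o1
```

Because the events remain in the store, a failed consumer can simply resume from where it left off and replay what it missed.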

Check out this real-world case study of how Jet.com implemented an event-sourced architecture using the Azure Cosmos DB change feed with hundreds of microservices. There is also a video on this from Microsoft Build 2018.

Real-time stream processing and materialized view patterns

The change feed can also be used for real-time stream processing and analytics. In the order processing pipeline scenario, for example, this would enable you to take all the events and materialize a single view for tracking order status. You could then easily and efficiently present the order status through a user-facing API.
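
A minimal sketch of materializing that view: fold the ordered event stream into a dictionary where the last event seen for an order becomes its current status. The status values and field names are assumptions for illustration.

```python
# Build an order-status view from the event stream. Because the change feed
# delivers changes in order, the last event per order wins.

def materialize_status(events):
    view = {}
    for e in events:
        view[e["orderId"]] = e["status"]
    return view

view = materialize_status([
    {"orderId": "o1", "status": "placed"},
    {"orderId": "o1", "status": "shipped"},
    {"orderId": "o2", "status": "placed"},
])
print(view["o1"])  # → shipped
```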

Other examples of so-called “lambda architectures” include performing real-time analytics on IoT telemetry or building a scoring leaderboard for a massively multiplayer online video game.

Moving or archiving data to secondary data stores

Another common scenario for using the change feed involves replicating data from Azure Cosmos DB as your primary (hot) store to some other secondary (cold) data store. Azure Cosmos DB is a superb hot store because it can sustain heavy write ingestion, and then immediately serve the ingested records back out to a user-facing API.

Over time as the volume of data mounts, however, you may want to offload older data to cold storage for archival. Once again, change feed is a wonderful mechanism to implement a replication strategy that does just that.

Consuming the change feed

So how do you actually work with the change feed? There are several different ways, and I’ll conclude this blog post by briefly explaining them.

Direct Access

First, you can query the change feed directly. This raw approach works but is the hardest to implement at large scale. Essentially, you first need to discover all the container’s partitions, and then you query each of them for their changes. You also need to persist state metadata; for example, a timestamp for when the change feed was last queried, and a continuation token for each partition. Plus, you’ll want to optimize performance by spawning multiple tasks across different partitions so that they get processed in parallel.
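
To illustrate just the bookkeeping this raw approach demands, here is a simplified Python sketch using in-memory stand-ins: each partition keeps an ordered change log, and the consumer persists a continuation token per partition so the next query resumes where the last one left off. This is a conceptual model only, not the actual SDK API.

```python
# partitions: partition key -> ordered list of changes (stand-in for a container)
# tokens: partition key -> continuation token (here, just an index) that must
# be persisted between runs so no change is read twice or missed.

def read_changes(partitions, tokens):
    results = {}
    for pk, changes in partitions.items():
        start = tokens.get(pk, 0)       # resume from the saved continuation token
        results[pk] = changes[start:]
        tokens[pk] = len(changes)       # persist the new token for this partition
    return results

partitions = {"pk1": ["a", "b"], "pk2": ["c"]}
tokens = {}
first = read_changes(partitions, tokens)    # reads everything so far
partitions["pk1"].append("d")               # a new write arrives
second = read_changes(partitions, tokens)   # reads only the new change
print(second)  # → {'pk1': ['d'], 'pk2': []}
```

A production version would also have to discover partitions dynamically and fan work out across parallel tasks, which is exactly the complexity the CFP library absorbs.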

If all this sounds like a lot of work, that’s because it is, which is why you’ll almost certainly want to leverage the Change Feed Processor (CFP) library instead.

Change Feed Processor (CFP) Library

The CFP library provides a high-level abstraction over direct access that greatly simplifies the process of reading the change feed from all the different partitions of a container. This is part of the Azure Cosmos DB .NET SDK, and it handles all the aforementioned complexity for you. It will automatically persist state, track all the partitions of the container, and acquire leases so that you can scale out across many consumers.

To make this work, the CFP library persists a set of leases as documents in another dedicated Azure Cosmos DB container. Then, when you spin up consumers, they attempt to acquire leases as they expire.

All you do is write an observer class that implements IChangeFeedObserver. The primary method of this interface that you need to implement is ProcessChangesAsync, which receives the change feed as a list of documents that you can process as needed. No partitions to worry about, no timestamps or continuation tokens to persist, and no scale-out concerns to manage.
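
The real interface is IChangeFeedObserver in the .NET SDK; as a rough structural analogue (not the actual API), the observer shape looks like this in Python:

```python
import asyncio

class OrderObserver:
    """Structural sketch of a change feed observer: one async method that
    receives each batch of changed documents."""

    def __init__(self):
        self.seen = []

    async def process_changes_async(self, docs):
        # Corresponds to ProcessChangesAsync: handle a batch of changed docs.
        self.seen.extend(d["id"] for d in docs)

observer = OrderObserver()
asyncio.run(observer.process_changes_async([{"id": "a"}, {"id": "b"}]))
print(observer.seen)  # → ['a', 'b']
```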

However, you still need to write your own host and deploy the DLL with your observer class to an Azure App Service. Although the process is straightforward, going with Azure Functions instead provides an even easier deployment model.

Azure Functions

The simplest way to consume the change feed is by using Azure Functions with a Cosmos DB trigger. If you’re not already familiar with Azure Functions, they let you write individual methods (functions) that you deploy for execution in a serverless environment hosted on Azure. The term “serverless” here means you don’t have to write a host or deploy an Azure App Service to run your code.

Azure Functions are invoked by any one of a number of triggers. Yes, Azure Functions also uses the term trigger, but unlike triggers in Azure Cosmos DB, an Azure Functions trigger always fires when its conditions are met. There are several different triggers available, including the one we care about here: the Azure Cosmos DB trigger. This trigger binds to configuration that points to the container you want to monitor for changes, and to the lease collection that the CFP library manages under the covers.

From there, the process is identical to using the CFP library; only the deployment model is dramatically simpler. The Azure Functions method bound using the Azure Cosmos DB trigger receives a parameter with the same list of documents that the CFP library’s observer class receives in its ProcessChangesAsync method, and you process the change feed data just the same. Plus, since the Azure Cosmos DB trigger attribute for Azure Functions is a wrapper around the CFP library, you get all the same benefits of a stateful and scalable solution.
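
As a sketch of what that binding configuration can look like, here is a minimal function.json fragment for the Azure Cosmos DB trigger. The database, container, and connection-setting names are placeholders, and exact property names can vary by Functions runtime and extension version:

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "documents",
      "direction": "in",
      "connectionStringSetting": "CosmosDBConnection",
      "databaseName": "StoreDatabase",
      "collectionName": "OrdersContainer",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": true
    }
  ]
}
```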

Pull Model

Using the CFP Library or Azure Functions, you get a push model, where your code sits and waits for changes that get pushed to it by Azure Cosmos DB in real-time. There is now also a new pull model available, where you can control the pace of consumption. Similar to the (non-recommended) direct approach, your code queries the change feed to “pull” the changes from it, rather than waiting for changes to get pushed. While the push model usually provides a better approach, there are some cases where the pull model can be easier to work with. For one, it has the unique ability to query for changes by partition key. Also, by assuming control of change feed consumption, it’s easier for those one-time data migration scenarios as well.
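
Conceptually, the pull model inverts control: your code asks for the next page of changes whenever it is ready, carrying a continuation forward between calls. Here is a hedged in-memory sketch of that loop (not the actual SDK API):

```python
# The caller controls the pace: each call returns the next page of changes
# plus an updated continuation (modeled here as a simple index).

def pull_page(feed, token, page_size=2):
    page = feed[token:token + page_size]
    return page, token + len(page)

feed = ["c1", "c2", "c3"]          # ordered changes waiting on the feed
page, token = pull_page(feed, 0)   # consumer pulls when it's ready
print(page, token)  # → ['c1', 'c2'] 2
```

This pacing control is what makes the pull model attractive for one-time migrations, and the real API additionally lets you scope the pull to a single partition key.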

Get started

Hopefully, this blog post opened your eyes to the change feed in Azure Cosmos DB. This powerful feature creates all sorts of exciting possibilities across a wide range of use cases. In future posts, I’ll walk you through the steps of building a change feed solution using the CFP library, Azure Functions, and the pull model. So, stay tuned!

Want to build event-driven microservices using change feed for Azure Cosmos DB? Check out my new Pluralsight course.

About Azure Cosmos DB:

Azure Cosmos DB Change feed documentation

8 comments

Comments are closed.

  • Jonathan LEI

    Thanks for sharing Leonard!

    I’m particularly interested in the scenario of using Cosmos as a persistent event store. We’re currently running an event-sourced system with a self-hosted Kafka cluster (within the same vnet as the microservices) as event store on Azure, and are looking for potentially better solutions.

    I came across the change feed a while ago when reading about Cosmos DB, and thought it was a great fit for an event store. We haven’t pulled the trigger yet because our system is latency sensitive, and there appears to be very little written about the change feed’s latency characteristics. Since you’re blogging about it, would you mind shedding some light on it? For example, what kind of latency should we expect between the time a change is made to a container and the time the update can be consumed from the change feed? Also, I read somewhere that the change feed client uses polling for fetching updates. That sounds pretty bad to me for latency-sensitive workloads.

    For comparison’s sake, the average latency in our current system from sending an event, to the event getting processed in the consumer is less than 1 millisecond.

    I’ll certainly run some benchmarks myself before investing into Cosmos. But it would be perfect if you could share more on this with us. Thanks in advance 🙂

    • Lenni Lobel

      Hi Jonathan,

      Thanks for sharing your use case. The Azure Cosmos DB Change Feed makes an awesome persistent event store, and it can process changes in real-time. To the developer, the Change Feed Processor (CFP) Library (with or without Azure Functions) gives you a push model. Behind the scenes, the CFP Library does indeed poll for changes (using an interval that you can specify) – but only as long as there aren’t any. So if there are no changes, it will wait that specified interval before checking again. But once it sees one or more changes and you process them, it immediately checks for new changes without waiting for another polling interval. Thus, it “drains” all current changes with zero latency before it resumes polling. Because there’s no delay after processing previous changes and before checking for new changes, you can stream data into Cosmos DB and do a good job of saturating your throughput as you process changes in real-time.

      Hope this helps, and please don’t hesitate to follow-up with more questions!

      • Jonathan LEI

        Hi Leonard,

        Thanks for the prompt response. Really appreciated.

        I decided to run some simple tests, and created a free tier account with a database provisioned at 400 RU/s. I have one thread firing 5,000 items in a for loop, one at a time, and a change feed processor measuring latency as the time between when `CreateItemAsync` is called and when the item is processed.

        To minimize the impact of polling, I set the poll interval to 1ms. I’m also running the test from an Azure VM that’s in the very same region as the database account.

        I tested with 1,000, 5,000, and 10,000 inserts, and got around 60ms average latency in all cases.

        Yes I do realize I can improve throughput by batching inserts, but what I’m trying to test here is latency, since that’s what matters most for our scenario. (Also it’s probably more “correct” to fire the next item only after the previous one is processed, but I’m trying to balance out the effect of polling)

        The real world latency will probably be worse because it’s unrealistic to set poll interval to 1ms in production. This worsens the matter especially when events are infrequent (where the interval is actually awaited).

        A latency (defined here as the time it takes for the processor to get the event since it’s fired) of around 60ms is probably good enough for non-latency-sensitive workloads. Unfortunately we have a much tougher requirement on latency for our scenario, and can’t use the Cosmos change feed as our primary event store. Will definitely explore other use cases for the amazing tech though. Thank you again for sharing!

  • Christopher Parker

    A great article as always. I really enjoyed your most recent Pluralsight course on Cosmos DB. I never thought about using TTL to help delete with change feed. Great tip. I can’t wait until that is native in the change feed.

  • Malte Müller

    Thank you for the article.

    I am interested in the costs of using the Change Feed. Does it consume regular Request Units, or how does this actually work? Does it use Request Units for the polling you mentioned in your answer to Jonathan LEI’s comment?

    • Mark Brown (Microsoft employee)

      Yes, Change Feed consumes a small amount of request units (RUs) when it polls for new items. Then, if it finds new items, it will consume them as the code you wrote reads them off the Change Feed. But these are efficient point reads. There is also load on the lease collection, but here too it is minimal.