{"id":6465,"date":"2026-02-18T09:00:17","date_gmt":"2026-02-18T17:00:17","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/azure-sql\/?p=6465"},"modified":"2026-02-23T11:36:53","modified_gmt":"2026-02-23T19:36:53","slug":"data-api-builder-chaining","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/azure-sql\/data-api-builder-chaining\/","title":{"rendered":"Federating Databases with Data API Builder Chaining"},"content":{"rendered":"<p>For decades, DBAs relied on linked servers to stitch data together. If you needed data from two places, you wired them up and moved on. It worked. It was straightforward. It felt native to SQL.<\/p>\n<p>But what if linked servers are not an option? What if policy blocks them? What if one of the systems is not SQL or lives in another cloud? How do you cross engines, environments, and ownership boundaries without turning your architecture into a science project?<\/p>\n<p>In most modern enterprises, the limitation is not SQL. It is usually something less visible: governance, separation of duties, or risk management policy. Application developers often can&#8217;t alter schemas, enable cross-database features, or introduce new trust relationships. 
They typically need to introduce composition at the app or API level without refactoring databases.<\/p>\n<h3>Keep it Simple<\/h3>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2026\/02\/linked-servers-dab.drawio.svg\"> <img decoding=\"async\" width=\"511\" height=\"431\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2026\/02\/linked-servers-dab.drawio.svg\"\n alt=\"Diagram showing a Data API Builder chaining pattern where one database and its API act as an integration hub federating access to multiple downstream databases through additional Data API instances\"\n class=\"aligncenter size-full wp-image-6468\"\n style=\"width:100%; height:auto;\"\n role=\"img\" \/> <\/a><\/p>\n<p>This is where Data API Builder chaining becomes super interesting. Slightly unconventional, yes, but extremely practical. DAB can already target multiple data sources within a single configuration, and in many scenarios that API-level composition is sufficient. Chaining takes it a step further.<\/p>\n<blockquote>\n<p>Learn more about Data API builder: <a href=\"https:\/\/aka.ms\/dab\/docs\">https:\/\/aka.ms\/dab\/docs<\/a><\/p>\n<\/blockquote>\n<p>Now, with <code>sp_invoke_external_rest_endpoint<\/code> in SQL, composition can move into the database engine itself. That shifts the architectural question from \u201ccan we federate?\u201d to \u201cwhere should federation live?\u201d In this article, we deliberately push it into SQL and chain DAB instances through REST. We aren&#8217;t overlooking that there are many options, but if your hands are tied, here&#8217;s a nice one.<\/p>\n<h2>The scenario<\/h2>\n<p>Imagine an online store with three databases. DB1 is a SQL catalog. DB2 is a Postgres inventory system managed by a warehouse vendor. 
DB3 is Cosmos DB holding product relationships derived by a backend machine learning system.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2026\/02\/three-data-sources.drawio.svg\"> <img decoding=\"async\" width=\"471\" height=\"131\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2026\/02\/three-data-sources.drawio.svg\"\n alt=\"Diagram showing three federated data sources: a SQL product catalog, a Postgres inventory database, and a Cosmos DB machine learning system\"\n class=\"aligncenter size-full wp-image-6483\"\n style=\"width:100%; height:auto;\"\n role=\"img\" \/> <\/a><\/p>\n<p>Of course, we want one call: hit DB1 and ask for a product. Behind the scenes, though, we enrich that response with inventory from DB2 and recommendations from DB3.<\/p>\n<pre><code class=\"sql\">CREATE OR ALTER PROCEDURE dbo.sp_GetProduct\n    @productId INT,\n    @userId INT\nAS\nBEGIN\n    SET NOCOUNT ON;\n\n    -- TODO: fetch product details from DB1\n\n    -- TODO: federate inventory data from DB2\n\n    -- TODO: federate related products from DB3\n\n    -- TODO: shape and return final result\nEND\nGO\n<\/code><\/pre>\n<h2>The synchronous approach<\/h2>\n<p>Synchronous federation means enrichment happens as part of the request. There is no staging or precomputation. SQL calls downstream APIs, merges the results, and returns a unified payload. The upside is freshness; the tradeoff is serialization. SQL does not fan out in parallel here. Instead, it calls DB2 and waits, then calls DB3 and waits. Every downstream delay becomes user-facing latency, but the freshness can be worth a short progress bar.<\/p>\n<p>And then there is failure. Networks stall or APIs time out. Your real design decision is not whether failure occurs; it is how to handle it and how much of it you allow to surface. 
The good news is that SQL gives you solid options for building more dependable operations.<\/p>\n<h3>dbo.sp_GetProductInventory_FromAPI<\/h3>\n<p>Assume DB2 exposes a DAB endpoint:<\/p>\n<blockquote>\n<p>{DB2}\/api\/getinventory\/productid\/{productid}<\/p>\n<\/blockquote>\n<p>To call it, we register a credential and endpoint once, then invoke by name.<\/p>\n<pre><code class=\"sql\">-- Register the credential so we can use it by name\nCREATE DATABASE SCOPED CREDENTIAL Db2ApiCredential\nWITH IDENTITY = 'Managed Identity';\nGO\n\n-- Register the endpoint so we can call it by name\nEXEC sys.sp_create_external_rest_endpoint\n    @name = 'Db2InventoryApi',\n    @url = 'https:\/\/db2-api.contoso.com',\n    @credential = 'Db2ApiCredential';\nGO\n\n-- Call the registered endpoint with a custom relative path\nDECLARE @productId INT = 42;\nDECLARE @productString NVARCHAR(20) = CAST(@productId AS NVARCHAR(20));\nDECLARE @path NVARCHAR(4000) = '\/api\/getinventory\/productid\/' + @productString;\nDECLARE @response NVARCHAR(MAX);\n\nEXEC sp_invoke_external_rest_endpoint\n    @endpoint = 'Db2InventoryApi',\n    @method = 'GET',\n    @path = @path,\n    @response = @response OUTPUT;\nGO\n<\/code><\/pre>\n<p>Since the API is Data API builder, we know the response comes back in a consistent JSON structure.<\/p>\n<pre><code class=\"json\">{\n  \"value\": [\n    {\n      \"productId\": 42,\n      \"quantityAvailable\": 128,\n      \"warehouse\": \"DEN-01\"\n    }\n  ]\n}\n<\/code><\/pre>\n<p>JSON is not a result set, but SQL turns it into one quickly with <code>OPENJSON<\/code>. In a single step you can start to use semi-structured data the same way you use your relational data. 
Because the shape is predictable, this conversion is clean and consistent every time.<\/p>\n<pre><code class=\"sql\">DECLARE @inventoryJson NVARCHAR(MAX) = @response;\n\nSELECT\n    inv.productId,\n    inv.quantityAvailable,\n    inv.warehouse\nFROM OPENJSON(@inventoryJson, '$.value')\nWITH\n(\n    productId INT '$.productId',\n    quantityAvailable INT '$.quantityAvailable',\n    warehouse NVARCHAR(50) '$.warehouse'\n) AS inv;\n<\/code><\/pre>\n<p>If you only need a scalar value, <code>JSON_VALUE<\/code> is even simpler.<\/p>\n<pre><code class=\"sql\">DECLARE @inventoryJson NVARCHAR(MAX) = @response;\nDECLARE @jsonPath NVARCHAR(100) = '$.value[0].quantityAvailable';\n\nDECLARE @quantityAvailable INT =\n    CAST(JSON_VALUE(@inventoryJson, @jsonPath) AS INT);\n\nSELECT @quantityAvailable AS quantityAvailable;\n<\/code><\/pre>\n<p>This isn&#8217;t a new pattern. It works.<\/p>\n<h3>Handling failure<\/h3>\n<p><code>sp_invoke_external_rest_endpoint<\/code> is synchronous and blocks. There is no parallel fan out, and that means you must be intentional about timeouts and fallback. Start with <code>@timeout<\/code> and decide how long you are willing to wait, then wrap the call in TRY\/CATCH. In every case, you decide what failure really means. A partial result could be fine. 
Again, that decision is yours.<\/p>\n<pre><code class=\"sql\">DECLARE @response NVARCHAR(MAX);\nDECLARE @quantityAvailable INT = NULL;\n\nBEGIN TRY\n    EXEC sp_invoke_external_rest_endpoint\n        @endpoint = 'Db2InventoryApi',\n        @method = 'GET',\n        @path = @path,\n        @timeout = 5,\n        @response = @response OUTPUT;\n\n    IF @response IS NOT NULL\n    BEGIN\n        SET @quantityAvailable =\n            CAST(JSON_VALUE(@response, '$.value[0].quantityAvailable') AS INT);\n    END\nEND TRY\nBEGIN CATCH\n    SET @quantityAvailable = NULL;\nEND CATCH;\n\nSELECT @quantityAvailable AS quantityAvailable;\nGO\n<\/code><\/pre>\n<p>You are not eliminating failure here; you are just bounding it. You define the latency ceiling and fallback behavior. SQL shapes the final contract returned to the caller. Then, rinse and repeat this pattern for DB3 and assemble your final result.<\/p>\n<h2>The asynchronous option<\/h2>\n<p>Moving federation off the request path changes the performance profile entirely. Instead of enriching inside sp_GetProduct, you refresh local cache tables on a schedule or in response to events. sp_GetProduct becomes a fast read over local state.<\/p>\n<h3>Caching the downstream results<\/h3>\n<p>To make this work, all you really need are landing tables: one for inventory and one for recommendations, depending on what external data you plan to cache. This keeps the request path simple and insanely fast, but it adds a few more moving parts.<\/p>\n<h3>Refreshing the cache out of band<\/h3>\n<p><code>sp_invoke_external_rest_endpoint<\/code> still performs the integration work. It just runs inside a refresh procedure instead of inside sp_GetProduct.<\/p>\n<h3>Elastic Jobs<\/h3>\n<p>In Azure SQL Database, Elastic Jobs are the native scheduling mechanism. Azure SQL does not include SQL Server Agent in the engine. 
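<\/p>\n<p>Before wiring up the scheduler, here is one possible shape for the landing table and refresh procedure described above. Treat it as a sketch: the table name, columns, and MERGE logic are illustrative assumptions, not a prescribed design.<\/p>\n<pre><code class=\"sql\">-- Hypothetical landing table for cached inventory\nCREATE TABLE dbo.ProductInventoryCache\n(\n    productId INT NOT NULL PRIMARY KEY,\n    quantityAvailable INT NULL,\n    warehouse NVARCHAR(50) NULL,\n    refreshedAtUtc DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()\n);\nGO\n\nCREATE OR ALTER PROCEDURE dbo.sp_RefreshProductInventoryCache\n    @productId INT\nAS\nBEGIN\n    SET NOCOUNT ON;\n\n    DECLARE @path NVARCHAR(4000) =\n        '\/api\/getinventory\/productid\/' + CAST(@productId AS NVARCHAR(20));\n    DECLARE @response NVARCHAR(MAX);\n\n    -- Same integration call as before, now off the request path\n    EXEC sp_invoke_external_rest_endpoint\n        @endpoint = 'Db2InventoryApi',\n        @method = 'GET',\n        @path = @path,\n        @timeout = 5,\n        @response = @response OUTPUT;\n\n    -- Upsert the latest snapshot into the local cache\n    MERGE dbo.ProductInventoryCache AS t\n    USING (\n        SELECT @productId AS productId,\n               CAST(JSON_VALUE(@response, '$.value[0].quantityAvailable') AS INT) AS quantityAvailable,\n               JSON_VALUE(@response, '$.value[0].warehouse') AS warehouse\n    ) AS s\n    ON t.productId = s.productId\n    WHEN MATCHED THEN UPDATE SET\n        quantityAvailable = s.quantityAvailable,\n        warehouse = s.warehouse,\n        refreshedAtUtc = SYSUTCDATETIME()\n    WHEN NOT MATCHED THEN INSERT (productId, quantityAvailable, warehouse)\n        VALUES (s.productId, s.quantityAvailable, s.warehouse);\nEND\nGO\n<\/code><\/pre>\n<p>The same timeout and TRY\/CATCH bounding from the synchronous section applies here, just without a user waiting on it.<\/p>\n<p>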
Elastic Jobs fill that role: a managed scheduler that executes T-SQL on a defined cadence.<\/p>\n<blockquote>\n<p>You can create and schedule elastic jobs that could be periodically executed against one or many Azure SQL databases to run Transact-SQL (T-SQL) queries and perform maintenance tasks. Learn more: <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/azure-sql\/database\/elastic-jobs-overview?view=azuresql\">MS Learn Docs<\/a><\/p>\n<\/blockquote>\n<p>Here you provision a job agent, define the schedule, and run your stored procedures. That&#8217;s it. Once in place, the flow is predictable and your refresh procedures update cache tables. sp_GetProduct reads local data, or mostly local data, depending on your final design.<\/p>\n<p>Your basic refresh logic does not change. Only the trigger changes.<\/p>\n<pre><code class=\"sql\">EXEC dbo.sp_RefreshProductInventoryCache @productId = 42;\nEXEC dbo.sp_RefreshProductRecommendationsCache @productId = 42, @userId = 1001;\nGO\n<\/code><\/pre>\n<p>Recommendation data makes for a good thought experiment. Inventory scales linearly with product count, but recommendations scale with product count multiplied by user count. That can grow fast. At 10,000 products and 100,000 users, that is a billion combinations, and refreshing every one is not realistic. Narrow the surface area to what actually drives value, or take a hybrid approach: cache the combinations that matter and fetch the rest on demand.<\/p>\n<h2>Conclusion<\/h2>\n<p>Federation does not require linked servers. With Data API Builder and <code>sp_invoke_external_rest_endpoint<\/code>, SQL can compose across databases, engines, and environments. You can do this <strong>synchronously<\/strong>, keeping everything fresh but coupling latency and failure to the request, or <strong>asynchronously<\/strong>, with Elastic Jobs that isolate variability and keep your request path fast. Which path you take, and how much of each, is your call.<\/p>\n<p>In either case, SQL stays the composition layer. Data API builder is your bridge. 
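<\/p>\n<p>As a sketch of what that bridge can look like, a single Data API builder configuration can reference more than one data source. The file names, connection settings, and sample entity below are illustrative assumptions; check the DAB configuration schema for your version.<\/p>\n<pre><code class=\"json\">{\n  \"$schema\": \"https:\/\/github.com\/Azure\/data-api-builder\/releases\/latest\/download\/dab.draft.schema.json\",\n  \"data-source\": {\n    \"database-type\": \"mssql\",\n    \"connection-string\": \"@env('DB1_CONNECTION_STRING')\"\n  },\n  \"data-source-files\": [\n    \"dab-config-db2.json\",\n    \"dab-config-db3.json\"\n  ],\n  \"entities\": {\n    \"Product\": {\n      \"source\": \"dbo.Products\",\n      \"permissions\": [\n        { \"role\": \"anonymous\", \"actions\": [ \"read\" ] }\n      ]\n    }\n  }\n}\n<\/code><\/pre>\n<p>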
Remember that DAB can target one data source or many. The diagram may show multiple API servers, but one DAB instance can federate several backends as long as authentication aligns.<\/p>\n<blockquote>\n<p>Data API builder does not execute cross database JOINs inside the engine. It lets you query across databases. The difference is where the JOIN happens.<\/p>\n<\/blockquote>\n<p>Federated data is enriched data, and enriched data makes your app better. Check it out!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>For decades, DBAs relied on linked servers to stitch data together. If you needed data from two places, you wired them up and moved on. It worked. It was straightforward. It felt native to SQL. But what if linked servers are not an option? What if policy blocks them? What if one of the systems [&hellip;]<\/p>\n","protected":false},"author":96788,"featured_media":6471,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,597,576],"tags":[409,560,713,317],"class_list":["post-6465","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-azure-sql","category-data-api-builder-2","category-rest","tag-api","tag-data-api-builder","tag-federation","tag-linked-server"],"acf":[],"blog_post_summary":"<p>For decades, DBAs relied on linked servers to stitch data together. If you needed data from two places, you wired them up and moved on. It worked. It was straightforward. It felt native to SQL. But what if linked servers are not an option? What if policy blocks them? 
What if one of the systems [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts\/6465","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/users\/96788"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/comments?post=6465"}],"version-history":[{"count":1,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts\/6465\/revisions"}],"predecessor-version":[{"id":6507,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts\/6465\/revisions\/6507"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/media\/6471"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/media?parent=6465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/categories?post=6465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/tags?post=6465"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}