{"id":6928,"date":"2026-04-21T11:52:07","date_gmt":"2026-04-21T18:52:07","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/azure-sql\/?p=6928"},"modified":"2026-04-21T11:52:07","modified_gmt":"2026-04-21T18:52:07","slug":"the-polyglot-tax-part-4","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-4\/","title":{"rendered":"The Polyglot tax \u2013 Part 4"},"content":{"rendered":"<h1>The Agent-Ready Database: Security, Backup, and MCP<\/h1>\n<h3>Part 4 of 4 \u2013 The Multi-Model Database Series<\/h3>\n<hr \/>\n<p><em>This is the final post in a four-part series on multi-model databases in SQL Server 2025 and Azure SQL &#8211; exploring how the optimizer, storage engine, and security layer treat each data model as a first-class citizen under one roof.<\/em><\/p>\n<p>In <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax\/\">Part 1: The Polyglot Tax<\/a>, we described the trajectory: you spin up a database, point an agent at it, and start building fast. The complexity comes later &#8211; JSON, graph, vectors, analytics &#8211; and each new requirement tempts you to spin up another database. In <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-2\/\">Part 2: When JSON Met Graph<\/a>, we proved that native JSON types and graph <code>MATCH<\/code> syntax compile into a single execution plan alongside relational joins. In <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-3\/\">Part 3: Vectors, Analytics, and the End of ETL<\/a>, we added DiskANN vector search and columnstore analytics &#8211; five data models, one stored procedure, one transaction boundary.<\/p>\n<p>But here is where many &#8220;multi-model&#8221; stories stop. They show you the features and skip the hard parts &#8211; the parts that determine whether your architecture survives contact with production. Security across all those data models. 
Backup and recovery that is actually consistent. An API layer that agents can call without you hand-coding middleware for every endpoint.<\/p>\n<p>Today we close the series with those hard parts. They are not as flashy as DiskANN or columnstore, but they are the reason you can actually trust a single-engine architecture at scale.<\/p>\n<h2>1 Unified Security: One Policy Across All Models<\/h2>\n<p>In <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax\/\">Part 1<\/a> we counted five separate auth systems in a polyglot stack. Let us see what &#8220;one security model&#8221; actually looks like in practice.<\/p>\n<h3>1&#046;1 Row-Level Security Across Data Models<\/h3>\n<p>Consider a multi-tenant SaaS application. Every table has a <code>TenantID<\/code> column. We want to guarantee that Tenant A never sees Tenant B&#8217;s data &#8211; regardless of which data model the query uses.<\/p>\n<p>First, define the filter function:<\/p>\n<pre><code class=\"sql\">CREATE FUNCTION dbo.fn_TenantFilter(@TenantID INT)\nRETURNS TABLE\nWITH SCHEMABINDING\nAS\nRETURN SELECT 1 AS fn_result\nWHERE @TenantID = CAST(SESSION_CONTEXT(N'TenantID') AS INT);\n<\/code><\/pre>\n<p>This function checks whether the row&#8217;s <code>TenantID<\/code> matches the tenant ID stored in the session context. 
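<\/p>\n<p>A hardening detail worth knowing: <code>sp_set_session_context<\/code> accepts a <code>@read_only<\/code> flag, so the value can be locked for the lifetime of the connection (a minimal sketch; the key name matches the filter function above):<\/p>\n<pre><code class=\"sql\">-- Set once when the connection is established, then lock it\nEXEC sp_set_session_context @key = N'TenantID', @value = 42, @read_only = 1;\n\n-- Any later attempt to overwrite the key raises an error, so code\n-- running later on the same connection cannot switch tenants mid-session\n<\/code><\/pre>\n<p>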
<code>SESSION_CONTEXT<\/code> is set at connection time by the application (or Data API Builder) &#8211; the user never controls it directly.<\/p>\n<p>Now apply it across all tables, regardless of data model:<\/p>\n<pre><code class=\"sql\">CREATE SECURITY POLICY TenantIsolation\nADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID)\n    ON dbo.Customers,               -- Relational\nADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID)\n    ON dbo.Events,                  -- JSON data\nADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID)\n    ON dbo.Relationships,           -- Graph edges\nADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID)\n    ON dbo.Embeddings               -- Vector data\nWITH (STATE = ON);\n<\/code><\/pre>\n<p>From this point forward, every query against these tables &#8211; relational joins, JSON path queries, graph traversals, vector similarity searches &#8211; is automatically filtered by tenant. The calling code does not need to include <code>WHERE TenantID = @id<\/code> in every query. The engine injects the filter predicate into the execution plan before any data leaves the storage engine.<\/p>\n<p>Here is what this means for security audits. In a polyglot system, the auditor asks: &#8220;Prove that Tenant A cannot access Tenant B&#8217;s data.&#8221; You need to demonstrate this for each database independently &#8211; the relational database&#8217;s RLS, the document store&#8217;s field-level access control, the graph database&#8217;s node-level restrictions, and hope that your vector store supports metadata filtering. Each system has different configuration, different failure modes, different edge cases.<\/p>\n<p>With a unified security model, the answer is one policy, one proof, one explanation. The filter predicate is compiled into the execution plan. It is not optional. 
It cannot be bypassed by switching to a different query syntax or data model.<\/p>\n<h3>1&#046;2 Beyond RLS: Layered Permissions<\/h3>\n<p>Row-Level Security handles <em>which rows<\/em> a caller can see. But a unified security model goes further.<\/p>\n<p><strong>Stored procedures as the permission boundary.<\/strong> When stored procedures are the interface &#8211; as we recommend in Section 6 &#8211; you grant <code>EXECUTE<\/code> on those procedures and nothing else. The caller has no direct <code>SELECT<\/code>, <code>INSERT<\/code>, <code>UPDATE<\/code>, or <code>DELETE<\/code> permissions on the underlying tables. If an agent&#8217;s credentials are compromised, the attacker can only call the procedures you exposed, with the parameters those procedures accept. No ad-hoc queries. No schema discovery.<\/p>\n<pre><code class=\"sql\">-- The agent can only call these procedures - nothing else\nGRANT EXECUTE ON dbo.FraudCheck TO agent_fraud_checker;\nGRANT EXECUTE ON dbo.GetCustomerContext TO agent_fraud_checker;\n-- No SELECT, INSERT, UPDATE, DELETE on any table\n<\/code><\/pre>\n<p><strong>Dynamic Data Masking<\/strong> protects sensitive columns &#8211; email addresses, phone numbers, account balances &#8211; from roles that should not see full values. Unlike RLS, which filters rows, masking returns the row but obscures column values. An analytics agent that needs aggregate patterns but not individual PII sees masked data by default.<\/p>\n<p><strong>Always Encrypted<\/strong> keeps sensitive columns encrypted end-to-end. The database engine never sees the plaintext. Even a DBA with full server access cannot read the data. For compliance-heavy workloads, this is the strongest guarantee the engine offers.<\/p>\n<p>These layers compose. 
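<\/p>\n<p>Masking, for example, is a per-column declaration (a sketch using the built-in <code>email()<\/code> and <code>partial()<\/code> masking functions; the column names are illustrative):<\/p>\n<pre><code class=\"sql\">ALTER TABLE dbo.Customers\n    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');\n\nALTER TABLE dbo.Customers\n    ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(0, \"XXX-XXX-\", 4)');\n\n-- Roles that genuinely need plaintext are exempted explicitly\nGRANT UNMASK TO compliance_auditor;\n<\/code><\/pre>\n<p>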
A single query can hit RLS (row filtering), Dynamic Data Masking (column obfuscation), and stored procedure permissions (surface restriction) simultaneously &#8211; all enforced by the engine, all configured in one place.<\/p>\n<h3>1&#046;3 Testing It<\/h3>\n<pre><code class=\"sql\">-- Set context to Tenant 100\nEXEC sp_set_session_context @key = N'TenantID', @value = 100;\n\nSELECT * FROM Customers;      -- Returns only Tenant 100 customers\nSELECT * FROM Events;          -- Returns only Tenant 100 events\nSELECT * FROM Relationships;   -- Returns only Tenant 100 graph edges\nSELECT * FROM Embeddings;      -- Returns only Tenant 100 vectors\n\n-- Switch context to Tenant 200\nEXEC sp_set_session_context @key = N'TenantID', @value = 200;\n\nSELECT * FROM Customers;      -- Returns only Tenant 200 customers\n-- Same tables, same queries, different results. No code changes.\n<\/code><\/pre>\n<p>The security policy is invisible to the application. Your queries do not change. Your stored procedures do not change. The engine handles it.<\/p>\n<h2>2 Unified Backup and Recovery<\/h2>\n<pre><code class=\"sql\">-- One backup captures ALL data models to a consistent point\nBACKUP DATABASE MultiModelApp\nTO URL = 'https:\/\/storage.blob.core.windows.net\/backups\/MultiModelApp.bak'\nWITH COMPRESSION, ENCRYPTION (\n    ALGORITHM = AES_256,\n    SERVER CERTIFICATE = BackupCert\n);\n\n-- One restore sequence recovers ALL data models to a consistent point:\n-- recover the full backup without bringing the database online,\n-- then roll the transaction log forward to the target moment\nRESTORE DATABASE MultiModelApp\nFROM URL = 'https:\/\/storage.blob.core.windows.net\/backups\/MultiModelApp.bak'\nWITH NORECOVERY;\n\nRESTORE LOG MultiModelApp\nFROM URL = 'https:\/\/storage.blob.core.windows.net\/backups\/MultiModelApp_log.trn'\nWITH STOPAT = '2026-02-01 10:30:00', RECOVERY;\n<\/code><\/pre>\n<p>The <code>STOPAT<\/code> clause is point-in-time recovery. 
The engine replays the transaction log to the specified moment, incorporating all operations &#8211; relational inserts, JSON updates, graph edge creations, vector inserts &#8211; into a single consistent state.<\/p>\n<p>In a polyglot system, point-in-time recovery across five databases requires coordinating five separate restore operations and hoping that the timestamps line up. If your graph database restored to 10:30:00 but your relational database restored to 10:29:58, you have two seconds of inconsistency. For financial data, audit trails, and compliance-sensitive systems, this is not acceptable.<\/p>\n<p>With one database, one transaction log, and one backup, the recovery is atomic.<\/p>\n<h2>3 Ledger Tables: Cryptographic Audit Trails<\/h2>\n<p>For regulated industries &#8211; finance, healthcare, government &#8211; &#8220;who changed what, when&#8221; needs to be tamper-evident. SQL Server&#8217;s ledger tables provide this:<\/p>\n<pre><code class=\"sql\">-- Updatable ledger table: tracks all changes with cryptographic hashes\nCREATE TABLE FinancialTransactions (\n    TransactionID INT PRIMARY KEY,\n    AccountID INT NOT NULL,\n    Amount MONEY NOT NULL,\n    TransactionType NVARCHAR(20),\n    Description NVARCHAR(500),\n    TransactionDate DATETIME2 DEFAULT SYSUTCDATETIME()\n)\nWITH (SYSTEM_VERSIONING = ON, LEDGER = ON);\n<\/code><\/pre>\n<p>Every insert, update, and delete is recorded in a history table with a cryptographic hash chain. 
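<\/p>\n<p>The history is queryable through system-generated objects (a sketch; for a ledger table the engine creates a companion <code>_Ledger<\/code> view, and <code>sys.database_ledger_transactions<\/code> records the committing transactions):<\/p>\n<pre><code class=\"sql\">-- Every row version, tagged with the operation that produced it\nSELECT TransactionID, Amount,\n       ledger_operation_type_desc, ledger_transaction_id\nFROM FinancialTransactions_Ledger;\n\n-- The transactions themselves: who committed what, and when\nSELECT transaction_id, commit_time, principal_name\nFROM sys.database_ledger_transactions;\n<\/code><\/pre>\n<p>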
The chain ensures that if anyone tampers with historical records &#8211; even a DBA with full access &#8211; the verification check fails.<\/p>\n<pre><code class=\"sql\">-- Verify nobody has tampered with the ledger\nEXECUTE sp_verify_database_ledger_from_digest_storage;\n<\/code><\/pre>\n<p>For append-only audit logs where updates and deletes should be physically impossible:<\/p>\n<pre><code class=\"sql\">CREATE TABLE AuditLog (\n    LogID INT IDENTITY PRIMARY KEY,\n    EventType NVARCHAR(50) NOT NULL,\n    EventData JSON,\n    UserName NVARCHAR(100) DEFAULT SUSER_SNAME(),\n    EventTime DATETIME2 DEFAULT SYSUTCDATETIME()\n)\nWITH (LEDGER = ON (APPEND_ONLY = ON));\n<\/code><\/pre>\n<p>An <code>APPEND_ONLY<\/code> ledger table rejects <code>UPDATE<\/code> and <code>DELETE<\/code> statements at the engine level. Combined with the JSON column for event data, this gives you a schema-flexible, tamper-evident audit trail &#8211; governed by the same RLS policies as everything else.<\/p>\n<h2>4 SQL MCP Server: From Database to Agent in 60 Seconds<\/h2>\n<p>We have established that the database engine can handle multiple data models with unified governance. The remaining question: how do applications and AI agents access it?<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/introducing-sql-mcp-server\/\"><strong>SQL MCP Server<\/strong><\/a> is a feature of <strong>Data API Builder (DAB)<\/strong> &#8211; Microsoft&#8217;s open-source engine that generates REST, GraphQL, and MCP endpoints directly from your database schema. No backend code. No ORM. No controller classes. 
It is the prescriptive approach for exposing enterprise databases to agents &#8211; and it solves problems that other database MCP implementations do not attempt.<\/p>\n<h3>4&#046;1 Installation and Configuration<\/h3>\n<pre><code class=\"bash\"># Install DAB CLI (.NET tool)\ndotnet tool install -g Microsoft.DataApiBuilder\n\n# Initialize a configuration file\ndab init --database-type mssql \\\n         --connection-string \"Server=localhost;Database=MultiModelApp;Trusted_Connection=true\"\n<\/code><\/pre>\n<p>This creates a <code>dab-config.json<\/code> file. Let us add entities:<\/p>\n<pre><code class=\"bash\"># Expose a table as REST + GraphQL + MCP\ndab add Customer \\\n  --source dbo.Customers \\\n  --permissions \"anonymous:read\" \\\n  --rest true \\\n  --graphql true\n\n# Expose a stored procedure (our multi-model query from Part 2)\ndab add GetCustomerContext \\\n  --source dbo.GetCustomerContext \\\n  --source.type \"stored-procedure\" \\\n  --source.params \"customerID:,query:\" \\\n  --permissions \"authenticated:execute\" \\\n  --rest.methods \"get,post\" \\\n  --graphql.operation \"query\"\n<\/code><\/pre>\n<pre><code class=\"bash\"># Start the API server\ndab start\n<\/code><\/pre>\n<p>You now have: &#8211; REST at <code>http:\/\/localhost:5000\/api\/Customer<\/code> &#8211; GraphQL at <code>http:\/\/localhost:5000\/graphql<\/code> &#8211; MCP at <code>http:\/\/localhost:5000\/mcp<\/code> for AI agents<\/p>\n<p>No code generation. No deployment pipeline for a separate API layer. 
If your table schema changes, DAB picks up the changes on restart.<\/p>\n<h3>4&#046;2 The Configuration File<\/h3>\n<p>Here is what DAB creates under the hood:<\/p>\n<pre><code class=\"json\">{\n  \"data-source\": {\n    \"database-type\": \"mssql\",\n    \"connection-string\": \"@env('SQL_CONNECTION_STRING')\"\n  },\n  \"entities\": {\n    \"Customer\": {\n      \"source\": {\n        \"type\": \"table\",\n        \"object\": \"dbo.Customers\"\n      },\n      \"rest\": { \"enabled\": true },\n      \"graphql\": {\n        \"enabled\": true,\n        \"type\": { \"singular\": \"Customer\", \"plural\": \"Customers\" }\n      },\n      \"permissions\": [\n        { \"role\": \"anonymous\", \"actions\": [\"read\"] }\n      ]\n    },\n    \"GetCustomerContext\": {\n      \"source\": {\n        \"type\": \"stored-procedure\",\n        \"object\": \"dbo.GetCustomerContext\",\n        \"parameters\": {\n          \"customerID\": \"\",\n          \"query\": \"\"\n        }\n      },\n      \"rest\": { \"methods\": [\"GET\", \"POST\"] },\n      \"graphql\": { \"operation\": \"query\" },\n      \"permissions\": [\n        { \"role\": \"authenticated\", \"actions\": [\"execute\"] }\n      ]\n    }\n  }\n}\n<\/code><\/pre>\n<p><strong>First<\/strong>, the connection string uses <code>@env('SQL_CONNECTION_STRING')<\/code> &#8211; it reads from an environment variable, not a plain-text config file. Configuration values can also reference Azure Key Vault secrets. Credentials never sit in a file that might be committed to source control.<\/p>\n<p><strong>Second<\/strong>, the permissions model distinguishes between <code>anonymous<\/code> and <code>authenticated<\/code> roles. The <code>Customer<\/code> entity allows anonymous reads (public catalog data). The <code>GetCustomerContext<\/code> procedure requires authentication. 
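<\/p>\n<p>Authentication itself is declared in the runtime section of the same file (a sketch of the <code>host.authentication<\/code> block; the audience and issuer values are placeholders for your own app registration):<\/p>\n<pre><code class=\"json\">\"runtime\": {\n  \"host\": {\n    \"authentication\": {\n      \"provider\": \"AzureAD\",\n      \"jwt\": {\n        \"audience\": \"api:\/\/my-dab-api\",\n        \"issuer\": \"https:\/\/login.microsoftonline.com\/{tenant-id}\/v2.0\"\n      }\n    }\n  }\n}\n<\/code><\/pre>\n<p>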
DAB supports Azure AD \/ Entra ID integration, so &#8220;authenticated&#8221; means the caller presented a valid JWT &#8211; which DAB validates before forwarding the request to SQL Server. The database-side RLS policy then filters the results by tenant. Two layers of defense, zero custom middleware.<\/p>\n<h3>4&#046;3 How SQL MCP Server Exposes Your Data to Agents<\/h3>\n<p>SQL MCP Server takes a fundamentally different approach from the database MCP servers you will find on GitHub. Three design decisions matter.<\/p>\n<p><strong>Fixed, small tool set.<\/strong> Regardless of how many tables or procedures your database has, SQL MCP Server exposes exactly seven DML tools: <code>describe_entities<\/code>, <code>create_record<\/code>, <code>read_records<\/code>, <code>update_record<\/code>, <code>delete_record<\/code>, <code>execute_entity<\/code>, and <code>aggregate_records<\/code>. The agent&#8217;s context window is its thinking space. When dozens of tools compete for that space, reasoning quality drops. A fixed tool set keeps the context focused &#8211; the agent thinks first and accesses data second.<\/p>\n<pre><code class=\"json\">\"runtime\": {\n  \"mcp\": {\n    \"enabled\": true,\n    \"path\": \"\/mcp\",\n    \"dml-tools\": {\n      \"describe-entities\": true,\n      \"create-record\": true,\n      \"read-records\": true,\n      \"update-record\": true,\n      \"delete-record\": true,\n      \"execute-entity\": true,\n      \"aggregate-records\": true\n    }\n  }\n}\n<\/code><\/pre>\n<p>You can disable individual tools per deployment. A read-only analytics agent gets <code>read_records<\/code> and <code>aggregate_records<\/code>. A fraud-checker agent gets <code>execute_entity<\/code> to call <code>dbo.FraudCheck<\/code>. The blast-radius scoping from Section 6 extends to the MCP layer.<\/p>\n<p><strong>Entity abstraction &#8211; no raw schema exposure.<\/strong> The agent never sees <code>dbo.Customers<\/code> or <code>dbo.TransactionHistory<\/code>. 
It sees the entity names you defined in the configuration: <code>Customer<\/code>, <code>GetCustomerContext<\/code>. You can alias column names, hide sensitive fields per role, and add descriptions that guide agent behavior. The internal schema stays internal.<\/p>\n<p><strong>Deterministic query building (NL2DAB, not NL2SQL).<\/strong> SQL MCP Server intentionally does not support NL2SQL. Models are not deterministic, and complex queries are the likeliest to produce subtle errors &#8211; exactly the queries users hope AI can generate. Instead, when an agent calls <code>read_records<\/code> or <code>aggregate_records<\/code>, DAB&#8217;s built-in query builder produces well-formed T-SQL deterministically from the structured tool parameters. Same input, same query, every time. The risk, overhead, and unpredictability of NL2SQL disappear entirely.<\/p>\n<h3>4&#046;4 Semantic Descriptions: Teaching Agents What Your Data Means<\/h3>\n<p>Without descriptions, an agent sees technical names like <code>ProductID<\/code> or <code>dbo.Orders<\/code>. With descriptions, the agent understands that <code>ProductID<\/code> is &#8220;Unique identifier for each product in the catalog&#8221; and the <code>Orders<\/code> entity contains &#8220;Customer purchase orders with line items and shipping details.&#8221; You can add descriptions at every level &#8211; entities, fields, and stored procedure parameters &#8211; using the DAB CLI.<\/p>\n<p>This matters for three reasons: agents find the right entities faster (tool discovery), build better queries with proper context (query accuracy), and return only relevant fields (field selection). 
In our fraud detection system, describing <code>dbo.FraudCheck<\/code> as &#8220;Real-time fraud assessment combining transaction history, device fingerprint, social graph, vector similarity, and statistical baselines&#8221; tells the agent exactly when to use it.<\/p>\n<h3>4&#046;5 Custom Tools: Stored Procedures as MCP Tools<\/h3>\n<p>For enterprises that want a bespoke MCP surface rather than a generic CRUD interface, SQL MCP Server supports promoting stored procedures as custom tools. You turn off the built-in DML tools and expose only your human-reviewed procedures. Our <code>dbo.FraudCheck<\/code> and <code>dbo.GetCustomerContext<\/code> become the <em>only<\/em> tools the agent can call &#8211; each with typed parameters, descriptions, and role-based permissions. This is the tightest possible contract between an agent and your data.<\/p>\n<h3>4&#046;6 Caching and Monitoring<\/h3>\n<p>SQL MCP Server automatically caches results from <code>read_records<\/code>. Both L1 (in-memory) and L2 (Redis \/ Azure Managed Redis) caching are supported, configurable per entity. For agent workloads that repeatedly query the same reference data &#8211; product catalogs, customer profiles, baseline statistics &#8211; caching reduces database load and prevents request stampedes.<\/p>\n<p>For observability, SQL MCP Server emits logs and telemetry to Azure Log Analytics, Application Insights, and OpenTelemetry endpoints. Combined with the observability patterns in Section 6, you get end-to-end visibility: which agent called which MCP tool, which T-SQL was generated, how long it took, and what the execution plan looked like.<\/p>\n<h2>5 The Multi-Model Stored Procedure<\/h2>\n<p>In <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-2\/\">Part 2<\/a> and <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-3\/\">Part 3<\/a> we built queries that combined individual data models. 
Here is a complete stored procedure that an AI agent calls via DAB to get full customer context in a single round-trip:<\/p>\n<pre><code class=\"sql\">CREATE PROCEDURE dbo.GetCustomerContext\n    @customerID INT,\n    @query NVARCHAR(MAX) = NULL\nAS\nBEGIN\n    SELECT \n        -- Relational: Core customer data\n        c.Name,\n        c.Email,\n        c.AccountStatus,\n        c.LifetimeValue,\n\n        -- JSON: Recent interactions (schema-flexible)\n        (\n            SELECT TOP 5 \n                JSON_VALUE(i.Data, '$.channel') AS Channel,\n                JSON_VALUE(i.Data, '$.sentiment') AS Sentiment,\n                i.Timestamp\n            FROM Interactions i\n            WHERE i.CustomerID = @customerID\n            ORDER BY i.Timestamp DESC\n            FOR JSON PATH\n        ) AS RecentInteractions,\n\n        -- Graph: Connected high-value customers\n        (\n            SELECT p2.Name, p2.LifetimeValue\n            FROM Customers c2, Knows k, Customers p2\n            WHERE MATCH(c2-(k)-&gt;p2)\n            AND c2.CustomerID = @customerID\n            AND p2.LifetimeValue &gt; 10000\n            FOR JSON PATH\n        ) AS HighValueConnections,\n\n        -- Vector: Similar past support tickets\n        (\n            SELECT TOP 3\n                t.Resolution,\n                t.SatisfactionScore\n            FROM SupportTickets t\n            WHERE @query IS NOT NULL\n            ORDER BY VECTOR_DISTANCE('cosine', t.Embedding, \n                (SELECT embedding FROM dbo.GetEmbedding(@query)))\n            FOR JSON PATH\n        ) AS SimilarTickets\n\n    FROM Customers c\n    WHERE c.CustomerID = @customerID;\nEND;\n<\/code><\/pre>\n<p>Here is what happens when an agent calls this via SQL MCP Server.<\/p>\n<ol>\n<li>The agent discovers <code>GetCustomerContext<\/code> through the <code>describe_entities<\/code> tool &#8211; the description tells it this procedure returns full customer context<\/li>\n<li>The agent calls 
<code>execute_entity<\/code> with <code>{\"entity\": \"GetCustomerContext\", \"parameters\": {\"customerID\": 123, \"query\": \"shipping delay\"}}<\/code><\/li>\n<li>SQL MCP Server validates the JWT, extracts the tenant ID, and sets <code>SESSION_CONTEXT<\/code><\/li>\n<li>DAB generates the <code>EXEC dbo.GetCustomerContext<\/code> call deterministically from the typed parameters &#8211; no NL2SQL, no prompt-to-query translation<\/li>\n<li>The engine executes one query plan that touches relational tables, JSON documents, graph edges, and vector indexes<\/li>\n<li>RLS filters all results by tenant &#8211; the agent only sees data its user is authorized to see<\/li>\n<li>The procedure returns a single JSON response with all four data models represented<\/li>\n<li>The agent receives complete context in one round-trip: ~10\u201350ms<\/li>\n<\/ol>\n<p>Compare this to the polyglot workflow from <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax\/\">Part 1<\/a>: 5 API calls, 5 auth contexts, 5 failure modes, 200\u2013500ms. The multi-model version is not just faster &#8211; it reduces the agent&#8217;s cognitive burden. Fewer tool calls means less opportunity for the model to hallucinate, lose context, or retry in unexpected ways.<\/p>\n<h2>6 When the Caller is an Agent<\/h2>\n<p>The stored procedures above work. But traditional database design assumes a deterministic, human-reviewed caller. Agents violate that assumption at every layer. They reason their way to queries, retry on failure, and hold connections while they think. The patterns needed to handle this &#8211; statement timeouts, append-only logs, idempotency keys, blast-radius scoping, query tagging &#8211; are not new technology. 
They are existing patterns elevated from &#8220;nice to have&#8221; to &#8220;load-bearing infrastructure.&#8221;<\/p>\n<h3>Stored procedures as the API surface<\/h3>\n<p>The strongest defense against unpredictable agent queries is to never let agents write arbitrary SQL. The <code>dbo.FraudCheck<\/code> procedure from <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-3\/\">Part 3<\/a> <em>is<\/em> the agent&#8217;s API surface. The agent calls <code>execute_entity<\/code> with typed parameters, and SQL MCP Server generates the <code>EXEC dbo.FraudCheck @PersonID = 3, @IncomingAmount = 9200<\/code> deterministically &#8211; not a five-table join the model reasoned into existence. This is the NL2DAB approach in practice: the agent expresses intent through structured tool calls, and DAB&#8217;s query builder produces the SQL. No prompt-to-SQL translation. No hallucinated table names. The &#8220;non-deterministic caller&#8221; problem stops mattering when the callable surface is a set of human-reviewed procedures exposed through a deterministic query builder.<\/p>\n<h3>Resource isolation<\/h3>\n<p>Agent workloads are unpredictable &#8211; a fraud-check burst can spike CPU while customer-facing checkout queries need consistent latency. Resource Governor creates workload groups with CPU, memory, and I\/O ceilings:<\/p>\n<pre><code class=\"sql\">-- Separate agent traffic from human-facing application queries\nCREATE RESOURCE POOL AgentPool WITH (CAP_CPU_PERCENT = 30);\nCREATE WORKLOAD GROUP AgentWorkload\n    USING AgentPool;\n\n-- Classify agent sessions into the workload group\n-- (classifier functions live in master)\nCREATE FUNCTION dbo.AgentClassifier() RETURNS SYSNAME\nWITH SCHEMABINDING AS\nBEGIN\n    RETURN CASE \n        WHEN SUSER_NAME() LIKE 'agent_%' THEN 'AgentWorkload' \n        ELSE 'default' \n    END;\nEND;\nGO\n\n-- The classifier takes effect only after it is registered\nALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.AgentClassifier);\nALTER RESOURCE GOVERNOR RECONFIGURE;\n<\/code><\/pre>\n<p>An agent running fraud checks that consumes excessive CPU hits the pool ceiling. 
Customer-facing checkout queries continue in their own pool, unaffected. This is enforced by the engine, not by the caller.<\/p>\n<h3>Idempotency and audit<\/h3>\n<p>Agents retry by design. Every orchestration framework uses at-least-once delivery. An agent that gets a timeout on a fraud decision might re-issue the same call. Temporal tables provide automatic history tracking for every row change:<\/p>\n<pre><code class=\"sql\">-- Fraud decisions table with temporal history and idempotency\nCREATE TABLE FraudDecisions (\n    DecisionID INT IDENTITY PRIMARY KEY,\n    TxnID INT NOT NULL,\n    PersonID INT NOT NULL,\n    Decision NVARCHAR(20) NOT NULL,   -- 'APPROVED', 'DENIED', 'REVIEW'\n    DecidedBy NVARCHAR(100) NOT NULL, -- 'agent:fraud-checker-v2' or 'user:analyst-jane'\n    IdempotencyKey NVARCHAR(64) NULL,\n    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START,\n    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END,\n    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo),\n    CONSTRAINT UQ_FraudDecision_Idempotency UNIQUE (IdempotencyKey)\n) WITH (SYSTEM_VERSIONING = ON);\n<\/code><\/pre>\n<p>Every change is recorded in a history table automatically. &#8220;Show me everything the anti-fraud agent changed in the last hour&#8221; is a built-in temporal query. 
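<\/p>\n<p>That question translates directly (a sketch against the <code>FraudDecisions<\/code> table above, using the standard <code>FOR SYSTEM_TIME<\/code> clause):<\/p>\n<pre><code class=\"sql\">-- Everything the anti-fraud agent changed in the last hour,\n-- including row versions that have since been overwritten\nSELECT DecisionID, TxnID, Decision, ValidFrom, ValidTo\nFROM FraudDecisions\nFOR SYSTEM_TIME ALL\nWHERE DecidedBy = 'agent:fraud-checker-v2'\n  AND ValidFrom &gt;= DATEADD(HOUR, -1, SYSUTCDATETIME());\n<\/code><\/pre>\n<p>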
The <code>UNIQUE<\/code> constraint on <code>IdempotencyKey<\/code> handles retries: the second write with the same key gets a constraint violation, and the agent treats it as &#8220;already processed.&#8221;<\/p>\n<h3>Blast radius scoping<\/h3>\n<p>The question is not &#8220;what does this agent need?&#8221; but &#8220;what is the worst case if this agent&#8217;s reasoning goes wrong?&#8221; In the fraud system, this translates directly:<\/p>\n<pre><code class=\"sql\">CREATE ROLE agent_fraud_checker;\nCREATE ROLE agent_analytics;\n\n-- Fraud checker agent: can read everything it needs, can only write decisions\nGRANT EXECUTE ON dbo.FraudCheck TO agent_fraud_checker;\nGRANT INSERT ON FraudDecisions TO agent_fraud_checker;\n-- Cannot modify Person.RiskScore, cannot delete transactions, cannot see PII\n\n-- Analytics agent: read-only against the columnstore\nGRANT SELECT ON TransactionHistory TO agent_analytics;\nGRANT SELECT ON Person TO agent_analytics;\n-- Cannot write anything\n<\/code><\/pre>\n<p>Row-Level Security (from Section 1) extends this further. A policy filters which persons or accounts the agent can see. The agent does not know the filter exists. It calls <code>dbo.FraudCheck<\/code> and gets back only the data it is authorized to see.<\/p>\n<h3>Query observability<\/h3>\n<p>When an agent issues a slow fraud check, you need to know which agent, which task, and which reasoning step produced it. Two layers work together.<\/p>\n<p>At the application layer, SQL MCP Server emits structured telemetry &#8211; including agent identity and request metadata &#8211; to Azure Log Analytics, Application Insights, and OpenTelemetry endpoints. 
Session context tags set at connection time provide correlation IDs that flow through DAB&#8217;s logging pipeline:<\/p>\n<pre><code class=\"sql\">-- Agent sets context at connection time\nEXEC sp_set_session_context @key = N'agent_id', @value = N'fraud-checker-v2';\nEXEC sp_set_session_context @key = N'task_id', @value = N'txn-review-9200';\n<\/code><\/pre>\n<p>At the database layer, Query Store captures every stored procedure execution with runtime stats and execution plans. When agent workloads funnel through a fixed set of stored procedures (as described above), identifying problematic queries is straightforward &#8211; Query Store tells you which executions were slow and why the optimizer chose that plan.<\/p>\n<h3>The pattern<\/h3>\n<p>None of these are add-ons. Temporal tables replace hand-built audit logs. Resource Governor replaces hand-built connection pool isolation. Row-Level Security replaces hand-built authorization filters. Query Store replaces hand-built query monitoring. The database was designed with these safeguards for human callers. 
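<\/p>\n<p>The Query Store piece, for instance, is a catalog query away (a sketch using the standard <code>sys.query_store_*<\/code> views; durations are reported in microseconds):<\/p>\n<pre><code class=\"sql\">-- Slowest statements over the last hour, by average duration\nSELECT TOP 10\n    qt.query_sql_text,\n    rs.avg_duration \/ 1000.0 AS avg_duration_ms,\n    rs.count_executions\nFROM sys.query_store_query_text qt\nJOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id\nJOIN sys.query_store_plan p ON q.query_id = p.query_id\nJOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id\nJOIN sys.query_store_runtime_stats_interval i\n    ON rs.runtime_stats_interval_id = i.runtime_stats_interval_id\nWHERE i.start_time &gt; DATEADD(HOUR, -1, SYSUTCDATETIME())\nORDER BY rs.avg_duration DESC;\n<\/code><\/pre>\n<p>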
For agentic callers, you activate them rather than invent them.<\/p>\n<h2>7 Reference Architecture: E-Commerce Platform<\/h2>\n<p>Here is what the full architecture looks like in production.<\/p>\n<pre><code>Azure SQL Database (Multi-Model Core)\n\u251c\u2500\u2500 Relational: Customers, Orders, Inventory, Payments\n\u251c\u2500\u2500 JSON: Product specs, event logs, user preferences, feature flags\n\u251c\u2500\u2500 Graph: Customer networks, fraud rings, product relationships\n\u251c\u2500\u2500 Vector (DiskANN): Product search, support ticket similarity, recommendations\n\u2514\u2500\u2500 Columnstore: Sales analytics, inventory forecasting, customer segmentation\n\nData API Builder\n\u251c\u2500\u2500 REST: Mobile\/web apps, partner integrations\n\u251c\u2500\u2500 GraphQL: Frontend teams, flexible queries\n\u2514\u2500\u2500 MCP: AI agents (Claude, GPT, custom)\n\nGovernance (applied uniformly)\n\u251c\u2500\u2500 Row-Level Security: Tenant isolation across all models\n\u251c\u2500\u2500 Dynamic Data Masking: PII protection in API responses\n\u251c\u2500\u2500 Always Encrypted: Client-side encryption for sensitive columns\n\u251c\u2500\u2500 Ledger Tables: Tamper-evident audit trail\n\u251c\u2500\u2500 TDE: Encryption at rest\n\u2514\u2500\u2500 Unified Audit: Single log for all data access\n<\/code><\/pre>\n<h3>Recommendation Engine<\/h3>\n<p>Here is a product recommendation procedure that uses all five data models:<\/p>\n<pre><code class=\"sql\">CREATE PROCEDURE dbo.GetProductRecommendations\n    @customerID INT,\n    @limit INT = 10\nAS\nBEGIN\n    WITH CustomerProfile AS (\n        -- Graph: What do customers in this person's network buy?\n        SELECT \n            oi.ProductID,\n            COUNT(*) * 0.3 AS GraphScore\n        FROM Customers c1, Similar s, Customers c2\n        JOIN Orders o ON c2.CustomerID = o.CustomerID\n        JOIN OrderItems oi ON o.OrderID = oi.OrderID\n        WHERE MATCH(c1-(s)-&gt;c2)\n        AND c1.CustomerID = 
@customerID\n        GROUP BY oi.ProductID\n    ),\n    VectorSimilarity AS (\n        -- Vector: Products semantically similar to recent purchases\n        -- (invert cosine distance so that closer vectors score higher)\n        SELECT \n            p.ProductID,\n            (1 - MIN(VECTOR_DISTANCE('cosine', p.Embedding, recent.Embedding))) * 0.4 AS VectorScore\n        FROM Products p\n        CROSS JOIN (\n            SELECT TOP 3 pr.Embedding \n            FROM OrderItems oi \n            JOIN Orders o ON oi.OrderID = o.OrderID\n            JOIN Products pr ON oi.ProductID = pr.ProductID\n            WHERE o.CustomerID = @customerID\n            ORDER BY o.OrderDate DESC\n        ) recent\n        GROUP BY p.ProductID\n    ),\n    TrendingProducts AS (\n        -- Columnstore\/Analytics: What is trending this week?\n        SELECT \n            ProductID,\n            PERCENT_RANK() OVER (ORDER BY SUM(Quantity)) * 0.2 AS TrendScore\n        FROM SalesHistory\n        WHERE SaleDate &gt; DATEADD(DAY, -7, GETUTCDATE())\n        GROUP BY ProductID\n    )\n    SELECT TOP (@limit)\n        p.ProductID,\n        p.Name,\n        JSON_VALUE(p.Specs, '$.shortDescription') AS Description,  -- JSON\n        p.Price,\n        COALESCE(cp.GraphScore, 0) +\n        COALESCE(vs.VectorScore, 0) +\n        COALESCE(tp.TrendScore, 0) AS TotalScore\n    FROM Products p\n    LEFT JOIN CustomerProfile cp ON p.ProductID = cp.ProductID\n    LEFT JOIN VectorSimilarity vs ON p.ProductID = vs.ProductID\n    LEFT JOIN TrendingProducts tp ON p.ProductID = tp.ProductID\n    WHERE p.InStock = 1\n    ORDER BY TotalScore DESC;\nEND;\n<\/code><\/pre>\n<p>One procedure. Five data models. One execution plan. One transaction. One security boundary. 
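<\/p>\n<p>Calling it is a single round trip &#8211; the customer ID below is a hypothetical placeholder, not a value from this series:<\/p>\n<pre><code class="sql">-- Hypothetical customer ID for illustration\nEXEC dbo.GetProductRecommendations @customerID = 42, @limit = 5;\n<\/code><\/pre>\n<p>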
One backup captures the entire state.<\/p>\n<h2>8 Getting Started<\/h2>\n<p>Everything in this series runs on free-tier infrastructure.<\/p>\n<h3>Option 1: SQL Server 2025 (On-Premises, Free)<\/h3>\n<p><a href=\"https:\/\/www.microsoft.com\/sql-server\/sql-server-downloads\">SQL Server 2025 Express<\/a> includes all multi-model features: native JSON type, graph tables, vector search, columnstore indexes. Up to 10 GB per database. No license cost.<\/p>\n<pre><code class=\"powershell\"># Install via winget\nwinget install Microsoft.SQLServer.2025.Express\n<\/code><\/pre>\n<h3>Option 2: Azure SQL Database (Cloud, Free Tier)<\/h3>\n<p><a href=\"https:\/\/learn.microsoft.com\/azure\/azure-sql\/database\/free-offer\">Azure SQL Database Free Offer<\/a> &#8211; 100,000 vCore-seconds per month. Full cloud-managed experience with automatic backups and high availability.<\/p>\n<h3>Data API Builder (Free, Open Source)<\/h3>\n<pre><code class=\"bash\">dotnet tool install -g Microsoft.DataApiBuilder\ndab init --database-type mssql --connection-string \"YOUR_CONNECTION_STRING\"\ndab add Customer --source dbo.Customers --permissions \"anonymous:read\"\ndab start\n<\/code><\/pre>\n<p>Sixty seconds from install to a running API with REST, GraphQL, and MCP endpoints.<\/p>\n<table>\n<thead>\n<tr>\n<th>Resource<\/th>\n<th>Link<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>SQL Server Downloads<\/td>\n<td><a href=\"https:\/\/www.microsoft.com\/sql-server\/sql-server-downloads\">microsoft.com\/sql-server<\/a><\/td>\n<\/tr>\n<tr>\n<td>Azure SQL Free Tier<\/td>\n<td><a href=\"https:\/\/learn.microsoft.com\/azure\/azure-sql\/database\/free-offer\">aka.ms\/sqlfreeoffer<\/a><\/td>\n<\/tr>\n<tr>\n<td>Data API Builder<\/td>\n<td><a href=\"https:\/\/github.com\/Azure\/data-api-builder\">github.com\/Azure\/data-api-builder<\/a><\/td>\n<\/tr>\n<tr>\n<td>DAB Documentation<\/td>\n<td><a 
href=\"https:\/\/learn.microsoft.com\/azure\/data-api-builder\/\">learn.microsoft.com<\/a><\/td>\n<\/tr>\n<tr>\n<td>SQL MCP Server<\/td>\n<td><a href=\"https:\/\/aka.ms\/sql\/mcp\">aka.ms\/sql\/mcp<\/a><\/td>\n<\/tr>\n<tr>\n<td>SQL Server Samples<\/td>\n<td><a href=\"https:\/\/github.com\/microsoft\/sql-server-samples\">github.com\/microsoft\/sql-server-samples<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Series Summary<\/h2>\n<p>Across four posts, we traced a single fraud detection scenario from architectural problem to production-ready solution:<\/p>\n<ul>\n<li><strong><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax\/\">Part 1: The Polyglot Tax<\/a><\/strong> &#8211; Five databases, five auth systems, five backup schedules, five things that break at 2 AM. We measured the compounding costs of polyglot persistence: network latency, saga patterns for consistency, exploding security surface area, operational overhead, and developer cognitive load. The tax is not any single cost &#8211; it is all of them multiplied together.<\/li>\n<li><strong><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-2\/\">Part 2: When JSON Met Graph<\/a><\/strong> &#8211; Native <code>JSON<\/code> type with pre-parsed binary storage and indexed paths replaced <code>NVARCHAR(MAX)<\/code> string parsing. Graph <code>MATCH<\/code> syntax compiled to the same join infrastructure the optimizer has refined for three decades. One query combined relational joins, JSON path extraction, and graph traversal in a single execution plan. Two databases eliminated.<\/li>\n<li><strong><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-3\/\">Part 3: Vectors, Analytics, and the End of ETL<\/a><\/strong> &#8211; DiskANN delivered billion-scale vector search on commodity storage. 
The pre-filter pattern &#8211; where relational predicates narrow 10 million rows to 3,000 before any distance computation &#8211; is an optimization that only works when vectors and relational metadata share the same engine. Columnstore achieved 60x analytical speedups through compression, column elimination, batch mode, and segment elimination. Two more databases eliminated.<\/li>\n<li><strong>Part 4<\/strong> (this post) &#8211; Row-level security applied uniformly across all five data models. Point-in-time backup and recovery as a single atomic operation. <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/introducing-sql-mcp-server\/\">SQL MCP Server<\/a> generating REST, GraphQL, and MCP endpoints without custom code &#8211; with entity abstraction, deterministic query building, and a fixed tool set that keeps the agent&#8217;s context window focused. Agent-hardening patterns &#8211; stored procedures as API surfaces, resource isolation, idempotency, blast-radius scoping &#8211; built from primitives the engine already has.<\/li>\n<\/ul>\n<h2>So What?<\/h2>\n<p>In <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax\/\">Part 1<\/a>, you were building fast. An agent, a database, a working app by lunch. Then the complexity arrived &#8211; JSON schemas that evolve weekly, graph traversals for fraud detection, semantic search for product discovery, real-time dashboards over millions of rows &#8211; and the conventional answer was to spin up a new database for each one. Five databases. Five auth systems. Five backup schedules. Five things that can break while you sleep.<\/p>\n<p>Across the next three posts, we dismantled that assumption:<\/p>\n<ul>\n<li><strong><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-2\/\">Part 2<\/a><\/strong>: JSON and graph live inside the same engine, compiled into the same execution plans, optimized by the same query planner. 
Two databases eliminated.<\/li>\n<li><strong><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-3\/\">Part 3<\/a><\/strong>: DiskANN provides web-scale vector search with hybrid filtering that external vector databases cannot replicate. Columnstore delivers 60x analytical speedups through physics, not tuning. Two more databases eliminated.<\/li>\n<li><strong>Part 4<\/strong> (this post): One row-level security policy across all five data models. One backup to a consistent point in time. <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/introducing-sql-mcp-server\/\">SQL MCP Server<\/a> generating REST, GraphQL, and MCP endpoints with entity abstraction, deterministic query building, and a fixed seven-tool surface that keeps agents focused. Agent-hardening patterns that use the engine&#8217;s existing primitives &#8211; not new inventions.<\/li>\n<\/ul>\n<p>Five databases consolidated to one. Five auth systems to one. Five backup schedules to one.<\/p>\n<p>For the agentic world, this is not an incremental improvement &#8211; it is a structural one. Your agent makes one call, gets one response, through one auth context, in under 50ms. No orchestrating between services. No hoping that five databases all return consistent data. No writing fallback logic because one of them is slow.<\/p>\n<p>The case is not that SQL Server is the only database you will ever need. There are workloads &#8211; petabyte-scale OLAP, real-time streaming with sub-millisecond latency, specialized time-series at extreme ingest rates &#8211; where purpose-built systems justify their operational overhead.<\/p>\n<p>The case is this: <strong>the database should never be the reason you slow down.<\/strong> Your agent is fast. Your iteration cycle is fast. Your architecture should keep up. 
For the majority of application workloads, a multi-model engine removes the tax entirely &#8211; one database, one optimizer, one security model, one backup, one API.<\/p>\n<p>The math from <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax\/\">Part 1<\/a> still holds: 1 \u00d7 1 \u00d7 1 = 1, not 5 \u00d7 5 \u00d7 5 = 125.<\/p>\n<p>Start with the free tier. Run the scripts. Point an agent at it. Build fast &#8211; and keep building fast.<\/p>\n<h2>The Full Series<\/h2>\n<ul>\n<li><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax\/\">Part 1: The Polyglot Tax<\/a> &#8211; The cost of running five databases for one application<\/li>\n<li><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-2\/\">Part 2: When JSON Met Graph<\/a> &#8211; Native JSON, graph MATCH, and cross-model execution plans<\/li>\n<li><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/the-polyglot-tax-part-3\/\">Part 3: Vectors, Analytics, and the End of ETL<\/a> &#8211; DiskANN vector search, columnstore analytics, and the five-model fraud check<\/li>\n<li>Part 4: The Agent-Ready Database (this post) &#8211; Security, backup, MCP, Data API Builder, and agent-ready architecture<\/li>\n<\/ul>\n<hr \/>\n<p><em>Further Reading:<\/em> &#8211; <a href=\"https:\/\/learn.microsoft.com\/azure\/azure-sql\/\">Azure SQL Database Documentation<\/a> &#8211; <a href=\"https:\/\/learn.microsoft.com\/sql\/relational-databases\/json\/json-data-sql-server\">JSON in SQL Server<\/a> &#8211; <a href=\"https:\/\/learn.microsoft.com\/sql\/relational-databases\/graphs\/sql-graph-overview\">Graph Processing in SQL Server<\/a> &#8211; <a href=\"https:\/\/learn.microsoft.com\/sql\/sql-server\/ai\/vectors\">Vector Search in SQL Server<\/a> &#8211; <a href=\"https:\/\/learn.microsoft.com\/sql\/relational-databases\/indexes\/columnstore-indexes-overview\">Columnstore Indexes<\/a> &#8211; <a 
href=\"https:\/\/learn.microsoft.com\/azure\/data-api-builder\/\">Data API Builder<\/a> &#8211; <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/introducing-sql-mcp-server\/\">SQL MCP Server<\/a> &#8211; <a href=\"https:\/\/learn.microsoft.com\/sql\/relational-databases\/security\/row-level-security\">Row-Level Security<\/a> &#8211; <a href=\"https:\/\/learn.microsoft.com\/sql\/relational-databases\/security\/ledger\/ledger-overview\">Ledger Tables<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Agent-Ready Database: Security, Backup, and MCP Part 4 of 4 \u2013 The Multi-Model Database Series This is the final post in a four-part series on multi-model databases in SQL Server 2025 and Azure SQL &#8211; exploring how the optimizer, storage engine, and security layer treat each data model as a first-class citizen under one [&hellip;]<\/p>\n","protected":false},"author":147113,"featured_media":81,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,594,668,710,720,672,619],"tags":[],"class_list":["post-6928","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-azure-sql","category-hyperscale","category-sql-database-in-fabric","category-sql-mcp-server","category-sql-mcp-server-2","category-sql-server-2025","category-t-sql"],"acf":[],"blog_post_summary":"<p>The Agent-Ready Database: Security, Backup, and MCP Part 4 of 4 \u2013 The Multi-Model Database Series This is the final post in a four-part series on multi-model databases in SQL Server 2025 and Azure SQL &#8211; exploring how the optimizer, storage engine, and security layer treat each data model as a first-class citizen under one 
[&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts\/6928","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/users\/147113"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/comments?post=6928"}],"version-history":[{"count":1,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts\/6928\/revisions"}],"predecessor-version":[{"id":7015,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts\/6928\/revisions\/7015"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/media\/81"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/media?parent=6928"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/categories?post=6928"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/tags?post=6928"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}