{"id":16635,"date":"2026-05-07T00:00:00","date_gmt":"2026-05-07T07:00:00","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/ise\/?p=16635"},"modified":"2026-05-07T02:49:45","modified_gmt":"2026-05-07T09:49:45","slug":"llm-sql-query-generation","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/ise\/llm-sql-query-generation\/","title":{"rendered":"SQL query generation from natural language"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>Converting natural language questions to SQL is valuable but difficult. At first\nglance, given a question and a database schema, generating correct SQL seems\nstraightforward. In reality, it is complex. LLMs must understand table\nrelationships, disambiguate terminology, apply correct joins, and handle\nreal-world database messiness: obscure table and column names, unnormalized\ndata, and key data buried in JSON columns. To face these challenges, you need a\nsystem capable of performing interactive data discovery and reasoning over\nintermediate results. This system must then create a SQL query that retrieves\nthe data needed to satisfy the user&#8217;s question.<\/p>\n<p>This post describes our approach to building a system which translates natural\nlanguage (NL) user questions to SQL queries. We evaluated multiple AI agent\napproaches (Azure Databricks AI\/BI Genie, custom implementations via GitHub\nCopilot CLI and Microsoft Agent Framework) using an evaluation framework derived\nfrom <a href=\"https:\/\/github.com\/bird-bench\/livesqlbench\">LiveSQLBench<\/a> to measure\nprogress. After performing various experiments, we achieved an accuracy of\napproximately 75% with our custom implementations. In this post, we share\npractical insights and lessons for building similar systems.<\/p>\n<p>Given the database messiness described above, our research focused specifically\non exploring unknown or poorly documented databases\u2014a scenario where simple\nschema-based approaches fail and adaptive reasoning becomes essential. This\ncontrasts with the well-studied case of well-annotated schemas, where existing\nNL-to-SQL solutions already perform effectively; our work targets the harder\nproblem where databases are unfamiliar and metadata is sparse or missing.<\/p>\n<h2>Research Foundation<\/h2>\n<p>Before building anything, we conducted a literature review to gain inspiration\nfrom existing approaches. We found the paper <a href=\"https:\/\/arxiv.org\/html\/2506.01273v1\">RAISE: Reasoning Agent for\nInteractive SQL Exploration<\/a>, which\ndescribes an agent that answers a user&#8217;s query prompted in natural language by\nnot only looking at the database schema, but also reasoning more deeply and\nsemantically by exploring the actual data found in the tables.<\/p>\n<p>This paper guided our exploration decisions and helped us avoid undesirable\nbehaviors such as answering questions prematurely without sufficiently\nunderstanding the available data and getting stuck in unproductive reasoning\nloops.<\/p>\n<h2>Dataset<\/h2>\n<p>Following the research, we searched for an existing dataset that could\ndemonstrate real-world database complexity. We used the <a href=\"https:\/\/github.com\/bird-bench\/livesqlbench\">LiveSQLBench\ndataset<\/a>, a benchmark designed\nexactly for the purpose of real-world NL-to-SQL tasks. We focused on\nmedium-complexity <code>SELECT<\/code> queries (2-3 table joins, multiple <code>WHERE<\/code>\nconditions) on databases with 10+ tables. 
The queries featured the real-world\nmessiness we wanted to tackle:<\/p>\n<ul>\n<li>Obscure table and column names, and unclear or undocumented relationships (for\nexample, missing foreign keys and implicit joins)<\/li>\n<li>JSON columns storing semi-structured data<\/li>\n<li>Issues due to denormalized data<\/li>\n<li>Multiple valid interpretations of ambiguous questions<\/li>\n<\/ul>\n<h2>Approach and Solution<\/h2>\n<p>We tested three different approaches, progressing from rapid prototyping to\nusing production-grade frameworks. This allowed us to study how existing agentic\ntools performed before building our own custom implementations, while\nmaintaining consistent evaluation criteria. All solutions were evaluated against\na common benchmark described below.<\/p>\n<p>We progressed through the following experiments:<\/p>\n<ol>\n<li><strong>GitHub Copilot CLI<\/strong>: Quick validation of the RAISE approach with access to\nmultiple model providers<\/li>\n<li><strong>Microsoft Agent Framework<\/strong>: Refined implementation using Microsoft&#8217;s main\nagent framework<\/li>\n<li><strong>AI\/BI Genie<\/strong>: Baseline comparison against AI\/BI Genie, a feature of Azure\nDatabricks that lets business users ask questions about their data in natural\nlanguage<\/li>\n<\/ol>\n<h3>Logical Flow<\/h3>\n<p>The diagram below shows a simplified agentic flow that the three solutions\nfollowed.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2026\/05\/agent-flow-diagram-scaled.webp\" alt=\"agent-flow-diagram\" \/><\/p>\n<h3>GitHub Copilot CLI<\/h3>\n<p>We built an initial proof of concept following RAISE paper principles to\nvalidate if a single agent with appropriate tooling could successfully generate\nthe expected SQL queries. This approach enabled us to explore LLM models (like\nGemini 3.0 and Claude Sonnet 4.5) not yet available on Microsoft Foundry at time\nof writing.<\/p>\n<p>The agent had access to four core tools:<\/p>\n<ul>\n<li><code>get_db_schema()<\/code>: Returns schema for all tables (column names, data types,\nconstraints, foreign keys)<\/li>\n<li><code>get_tb_table_schema(name)<\/code>: Returns detailed schema for a specific table\n(columns, types, constraints, foreign keys)<\/li>\n<li><code>get_db_table_list()<\/code>: Lists all table names<\/li>\n<li><code>query_db(sql)<\/code>: Executes SQL query and returns the resulting data<\/li>\n<\/ul>\n<h3>Microsoft Agent Framework<\/h3>\n<p>We developed a custom implementation expanding on what we learned with the\nGitHub Copilot CLI, using the same tool set but within Microsoft Foundry. We\nkept the design intentionally simple by using a straightforward agentic loop\nrather than orchestrating sub-agents and other advanced features of the\nframework. We found this granularity was sufficient for strong performance.<\/p>\n<h3>Azure Databricks AI\/BI Genie<\/h3>\n<p>We evaluated AI\/BI Genie, an existing enterprise solution from Azure Databricks&#8217;\necosystem that lets users interact with data using natural language, leveraging\ngenerative AI and Unity Catalog metadata. We conducted a series of experiments\nto understand how metadata enrichment affects accuracy in production tools.<\/p>\n<p>We progressed through multiple configurations: starting with empty Genie Spaces,\nthen adding column descriptions, populating the Knowledge Store with domain\nspecific instructions, and finally using Genie&#8217;s feedback mechanism to\niteratively save and reuse correct SQL patterns. 
See <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/databricks\/genie\/\">AI\/BI Genie\ndocumentation<\/a> for\nmore details.<\/p>\n<p>Genie performed poorly when little or no metadata\/annotations were available,\nbut became strong once we added column descriptions and domain instructions.\nBecause curating detailed metadata was not the focus of our investigation, we\nstopped the Genie track after capturing these results.<\/p>\n<h2>Evaluation Methodology<\/h2>\n<p>We needed to track performance consistently across solutions, so we standardized\nthe evaluation process and metrics used by every experiment run.<\/p>\n<h3>Evaluation<\/h3>\n<p>For each query in the dataset, we performed the following:<\/p>\n<ol>\n<li><strong>Generated Ground Truth<\/strong>: Executed the canonical SQL query against the\ndatabase and captured the exact results.<\/li>\n<li><strong>Prompted the agent<\/strong>: Each system under test received only the natural\nlanguage question.<\/li>\n<li><strong>Captured interim agent responses and function calls<\/strong>: Logged every message\nand tool invocation to enable step-by-step tracing, error diagnosis, and\nmetric attribution.<\/li>\n<li><strong>Captured the Prediction<\/strong>: The agent generated a SQL query and we executed\nit to get the results from the prediction.<\/li>\n<li><strong>Compared Results<\/strong>: We measured success using a flexible matching approach:\n<ul>\n<li><strong>Floating-point tolerance<\/strong>: Numerical values were compared with a\nconfigurable tolerance value. This handles floating-point precision issues\nwhile keeping results accurate.<\/li>\n<li><strong>Case-insensitive strings<\/strong>: Text values were compared case-insensitively\nto handle differences in casing.<\/li>\n<li><strong>Row-level matching<\/strong>: Bipartite matching ensured each ground truth row\nuniquely matched a predicted row, correctly handling duplicate values.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>The evaluation approach was inspired by the\n<a href=\"https:\/\/github.com\/bird-bench\/livesqlbench\">LiveSQLBench<\/a> system. With the goal\nto recognize that SQL results can differ for valid reasons (precision, column\nnaming conventions, value ordering) while remaining semantically equivalent.<\/p>\n<p>Each run produced the necessary results and log files to allow for diagnosing\nunexpected performance. This standardized testing and detailed logging enabled\nus to compare approaches and improve solutions iteratively.<\/p>\n<h4>Example<\/h4>\n<p><strong>1. Question<\/strong> &#8211; <em>Which modern-style homes (like brickwork houses or\napartments) in the Guar\u00e1 area also have TV service? List their household numbers\nin order<\/em><\/p>\n<p><strong>2. Agent prompt<\/strong> &#8211; instructions and hints for domain knowledge (business\ncontext like what fields mean, how data relates, and calculation formulas)\n(snippet below):<\/p>\n<pre><code class=\"language-text\">...Your task is to generate a correct PostgreSQL query to answer the user's [Question]. You may be given a [Hint] that will help you solve the question...\r\n\r\n[Hint]:\r\n- Income Classification (value_illustration):\r\n  Illustrates the income brackets for household economic status.\r\n  Ranges from 'Low Income' to 'Very High Income'. Null indicates undisclosed or irregular income.<\/code><\/pre>\n<p><strong>3. 
Agent reasoning<\/strong> &#8211; We captured interim messages and function calls in a\nlog file:<\/p>\n<pre><code class=\"language-jsonl\">{\"type\": \"call\", \"function\": \"get_db_table_list\", \"arguments\": {}}\r\n{\"type\": \"return\", \"result\": \"amenities\\nhouseholds\\n...\"}\r\n{\"type\": \"call\", \"function\": \"get_tb_table_schema\", \"arguments\": {\"name\": \"households\"}}<\/code><\/pre>\n<p><strong>4. Predicted SQL<\/strong> &#8211; The final output given by the agent:<\/p>\n<pre><code class=\"language-sql\">SELECT DISTINCT households.housenum\r\nFROM households\r\nJOIN properties ON properties.houselink = households.housenum\r\nJOIN amenities ON amenities.houseid = households.housenum\r\nWHERE TRIM(UPPER(households.locregion)) = 'GUAR\u00c1'\r\n  AND TRIM(UPPER(properties.dwelling_specs-&gt;&gt;'Dwelling_Class')) IN ('BRICKWORK HOUSE','APARTMENT','CONDOMINIUM')\r\n  AND TRIM(UPPER(amenities.cablestatus)) IN ('AVAIL','AVAILABLE','YES')\r\nORDER BY households.housenum;<\/code><\/pre>\n<p><strong>5. Results<\/strong> &#8211; The predicted SQL is then executed against the database and\ncompared with the ground truth.<\/p>\n<pre><code class=\"language-json\">{\"housenum\": \"35\"}, {\"housenum\": \"234\"}, ...<\/code><\/pre>\n<h2>Experiments<\/h2>\n<p>We systematically evaluated what factors improve or hinder NL-to-SQL accuracy\nacross three approaches. The experiments revealed clear patterns in system\nperformance and identified both solvable technical challenges and fundamental\nlimitations.<\/p>\n<h3>Experimental Journey<\/h3>\n<p>Our experiments followed an iterative approach focused on understanding how\nmetadata and validation mechanisms improve query generation. We tested each\nchange against our standardized evaluation dataset, allowing us to isolate the\nimpact of individual improvements. 
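<\/p>\n<p>To make a single run concrete, the sketch below shows roughly how one benchmark case flows through our harness: execute the canonical SQL to get ground truth, prompt the agent with only the natural language question, execute the predicted SQL, and compare the result rows. This is an illustrative sketch rather than our exact implementation; it assumes a PostgreSQL backend via <code>psycopg2<\/code>, and <code>run_agent<\/code>, <code>rows_match<\/code>, and the <code>question<\/code>\/<code>canonical_sql<\/code> case fields are hypothetical names.<\/p>\n<pre><code class=\"language-python\">import json\r\n\r\nimport psycopg2  # assumes a PostgreSQL benchmark database\r\n\r\n\r\ndef execute_sql(conn, sql):\r\n    \"\"\"Run a query and return the result rows as lists of values.\"\"\"\r\n    with conn.cursor() as cur:\r\n        cur.execute(sql)\r\n        return [list(row) for row in cur.fetchall()]\r\n\r\n\r\ndef evaluate_case(conn, case, run_agent, rows_match):\r\n    \"\"\"Evaluate one benchmark case (natural language question plus canonical SQL).\"\"\"\r\n    ground_truth = execute_sql(conn, case[\"canonical_sql\"])\r\n    # The agent sees only the question; its messages and tool calls are logged.\r\n    predicted_sql, trace = run_agent(case[\"question\"])\r\n    try:\r\n        predicted = execute_sql(conn, predicted_sql)\r\n    except Exception as err:  # failing SQL counts as a miss\r\n        return {\"ok\": False, \"error\": str(err), \"trace\": trace}\r\n    return {\"ok\": rows_match(ground_truth, predicted), \"sql\": predicted_sql, \"trace\": trace}\r\n\r\n\r\ndef run_benchmark(dsn, cases_path, run_agent, rows_match):\r\n    \"\"\"Score every case in a JSONL benchmark file and return overall accuracy.\"\"\"\r\n    conn = psycopg2.connect(dsn)\r\n    with open(cases_path) as handle:\r\n        cases = [json.loads(line) for line in handle]\r\n    results = [evaluate_case(conn, case, run_agent, rows_match) for case in cases]\r\n    return sum(result[\"ok\"] for result in results) \/ len(results)<\/code><\/pre>\n<p>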
The results below show this progression.<\/p>\n<p>For clarity in the tables below: <strong>Metadata<\/strong> refers to technical schema context\n(table and column descriptions, types, relationships), and <strong>domain hints<\/strong> are\nbusiness domain instructions we inject into prompts or a knowledge store to\nexplain what fields mean and how metrics should be computed.<\/p>\n<h3>Experiment Overview<\/h3>\n<h4>GitHub Copilot CLI<\/h4>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>Experiment Description<\/th>\n<th>Accuracy<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Claude Sonnet 4.5<\/td>\n<td>Metadata only<\/td>\n<td>66.70%<\/td>\n<\/tr>\n<tr>\n<td>Claude Sonnet 4.5<\/td>\n<td>Metadata and domain hints<\/td>\n<td>80.77%<\/td>\n<\/tr>\n<tr>\n<td>Gemini 3.0 Pro Preview<\/td>\n<td>Metadata and domain hints<\/td>\n<td>74.07%<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h4>Microsoft Agent Framework<\/h4>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>Experiment Description<\/th>\n<th>Accuracy<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>GPT-5 Mini<\/td>\n<td>Metadata only<\/td>\n<td>55.60%<\/td>\n<\/tr>\n<tr>\n<td>GPT-5 Mini<\/td>\n<td>Metadata and domain hints<\/td>\n<td>65.38%<\/td>\n<\/tr>\n<tr>\n<td>GPT-5 Mini<\/td>\n<td>Metadata and additional clarification agent tool providing domain knowledge<\/td>\n<td>69.23%<\/td>\n<\/tr>\n<tr>\n<td>GPT-5 Mini<\/td>\n<td>Metadata and domain hints with revisions to agent instructions<\/td>\n<td>76.92%<\/td>\n<\/tr>\n<tr>\n<td>GPT-5 Mini<\/td>\n<td>Ablation study &#8211; removing ability to query data<\/td>\n<td>38.46%<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h4>Azure Databricks AI\/BI Genie<\/h4>\n<table>\n<thead>\n<tr>\n<th>Experiment Description<\/th>\n<th>Accuracy<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Empty Spaces &#8211; no metadata or knowledge store<\/td>\n<td>9.50%<\/td>\n<\/tr>\n<tr>\n<td>Added column descriptions plus system instructions with domain knowledge<\/td>\n<td>69.23%<\/td>\n<\/tr>\n<tr>\n<td>Feedback mechanism &#8211; storing SQL patterns and joins from 3 feedback iterations<\/td>\n<td>88.50%<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Findings<\/h2>\n<h3>Schema Knowledge + Live Validation<\/h3>\n<p>Field-level metadata (technical schema details like column types and\nconstraints) and documentation (semantic understanding of what fields mean) are\nboth critical to successful query generation. Runtime database access allows the\nagent to test and validate its SQL before producing the final result.<\/p>\n<p>Giving the agent better schema\/context lifted accuracy across all systems, and\nletting it actually run queries to sanity-check its SQL was essential. When we\nremoved runtime querying, accuracy collapsed to 38.46%, so metadata alone was\nnot enough\u2014execution feedback mattered.<\/p>\n<p>The gain from having metadata available was substantial: AI\/BI Genie improved\nfrom 9.50% to 69.23%; Copilot CLI (Claude Sonnet 4.5) improved from 66.70% to\n80.77%; and the Agent Framework (GPT-5) improved from 55.60% to 69.23%.<\/p>\n<h3>Model Selection<\/h3>\n<p>Claude Sonnet 4.5 reached 80.77% accuracy vs. GPT-5 Mini at 69.23% on identical\ndatasets, prompts, and evaluation criteria in these final experiments. Our\ndefault and baseline model was GPT-5 Mini (we developed prompts there first);\nClaude generally performed better across many runs and produced more\ncomprehensive answers with the same prompt. This reflects model behavior on this\nsetup, not a universal ranking. 
Gemini 3.0 Pro Preview scored 74.07% on the same\nprompt and metadata. It showed strengths in whitespace\/temporal normalization\nand even surpassed ground truth quality in a few cases, but had gaps in\njoin-path coverage and JSON field selection. Models have distinct strengths and\nweaknesses, making it important to evaluate multiple options based on specific\nschema characteristics, latency\/cost constraints, and error tolerance\nrequirements.<\/p>\n<h3>Iterative Refinement and Feedback Loops<\/h3>\n<p>Beyond initial prompt engineering and metadata setup, iterative feedback\nmechanisms unlock significant additional improvements. In our AI\/BI Genie\nevaluation, we found that storing corrected SQL patterns and learned joins from\nuser feedback improved accuracy by 19.27 percentage points (69.23% to 88.50%).\nThis progression demonstrates that NL-to-SQL systems benefit from production\nfeedback loops where successful query patterns are captured and reused,\nparticularly for repeated query types and domain-specific patterns. This\nsuggests that production systems should consider mechanisms to identify common\nqueries, validate results, and feed corrections back into the knowledge base for\ncontinuous improvement.<\/p>\n<p>Genie&#8217;s feedback loop captures and reuses corrected queries, making it strong\nfor optimizing known or repeated queries, but applying those improvements is\nlargely manual; it lacks automation for end-to-end evaluation or broad data\nexploration, so it is less suited to hands-free exploration.<\/p>\n<h3>Business Logic Limitations<\/h3>\n<p>The majority of the remaining failures across all systems were business logic\nerrors: queries that executed successfully but returned incorrect results due to\nsemantic misunderstanding. This represents a fundamental challenge distinct from\ntechnical SQL generation.<\/p>\n<p>Examples include:<\/p>\n<ul>\n<li>Incomplete cost aggregations (5 of 8 required fields, 46.00% underestimation).<\/li>\n<li>Incorrect metric calculations (recomputing values rather than using\npre-calculated columns, which risked misalignment in calculation logic, units,\nor decimal precision).<\/li>\n<li>Ambiguous interpretations (pattern matching &#8220;ultra-low temperature&#8221; vs.\nspecific &#8220;-70\u00b0C&#8221; value).<\/li>\n<\/ul>\n<p>We found that metadata and prompt optimization did not resolve business logic\nerrors. These require domain expertise, comprehensive test coverage, and\niterative refinement. We learned to encode formulas and constraints explicitly\nthrough prompt engineering or knowledge bases, rather than expecting models to\ninfer them.<\/p>\n<h3>Evaluation Strategy<\/h3>\n<p>While our flexible matching approach handled numeric precision and row-level\nvariations, structural differences remained challenging. Queries returning\ndifferent numbers of columns, whitespace normalization, and semantic aliases\ncould make functionally equivalent results appear different. Adjusting the\nevaluation to account for these differences raised accuracy significantly. In a few cases the\nprediction outperformed ground truth (e.g., better whitespace normalization and\ntemporal filtering). 
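<\/p>\n<p>As an illustration of that flexible matching, here is a minimal sketch of the comparison logic: numeric values compared with a tolerance, strings compared case-insensitively, and each ground truth row paired uniquely with a predicted row via bipartite matching. The names and the tolerance value are illustrative, and the sketch assumes the columns of both result sets are already aligned; it is one way to implement a <code>rows_match<\/code>-style helper, not our exact code.<\/p>\n<pre><code class=\"language-python\">import math\r\n\r\nFLOAT_TOLERANCE = 1e-2  # configurable; illustrative value only\r\n\r\n\r\ndef values_match(expected, actual):\r\n    \"\"\"Compare two cells: numeric tolerance for numbers, case-insensitive text.\"\"\"\r\n    # Real result sets may also need Decimal and numeric-string normalization.\r\n    if isinstance(expected, (int, float)) and isinstance(actual, (int, float)):\r\n        return math.isclose(float(expected), float(actual), abs_tol=FLOAT_TOLERANCE)\r\n    if isinstance(expected, str) and isinstance(actual, str):\r\n        return expected.strip().lower() == actual.strip().lower()\r\n    return expected == actual\r\n\r\n\r\ndef rows_match_pair(expected_row, actual_row):\r\n    \"\"\"Two rows match when they have the same width and every cell matches.\"\"\"\r\n    return len(expected_row) == len(actual_row) and all(\r\n        values_match(e, a) for e, a in zip(expected_row, actual_row)\r\n    )\r\n\r\n\r\ndef rows_match(ground_truth, predicted):\r\n    \"\"\"Require a perfect bipartite matching: every ground truth row is paired\r\n    with exactly one distinct predicted row, which handles duplicate rows.\"\"\"\r\n    pairing = {}  # maps a predicted row index to its matched ground truth row index\r\n\r\n    def assign(gt_index, visited):\r\n        # Standard augmenting-path step of maximum bipartite matching.\r\n        for p_index, p_row in enumerate(predicted):\r\n            if p_index in visited or not rows_match_pair(ground_truth[gt_index], p_row):\r\n                continue\r\n            visited.add(p_index)\r\n            if p_index not in pairing or assign(pairing[p_index], visited):\r\n                pairing[p_index] = gt_index\r\n                return True\r\n        return False\r\n\r\n    return len(ground_truth) == len(predicted) and all(\r\n        assign(i, set()) for i in range(len(ground_truth))\r\n    )<\/code><\/pre>\n<p>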
We learned that evaluation strategy design should occur\nearly in the process and be tuned to the specific domain (e.g., numeric\ntolerance for finance, column flexibility for messy schemas) to avoid marking\ncorrect solutions as failures.<\/p>\n<h4>False Negative Example<\/h4>\n<p>When asked to calculate the percentage of returns with warranty claims, the\nground truth returned:<\/p>\n<pre><code class=\"language-json\">{\"wcr_percent\": \"53.30\"}<\/code><\/pre>\n<p>While our agent returned:<\/p>\n<pre><code class=\"language-json\">{\"total_returns\": \"1500\", \"returns_with_warranty_claim\": \"799\", \"percent_with_warranty_claim\": \"53.30\"}<\/code><\/pre>\n<p>The percentage value was identical (53.30%), but the query was initially marked\nincorrect due to different column names and the inclusion of supporting detail\ncolumns. This highlights a fundamental evaluation challenge: we cannot simply\nmatch on column names because LLMs naturally generate different names than the\nground truth. We could have invested further, for example by using an LLM to infer\nmappings between expected and actual column names, but we found that the accuracy\nscores we achieved were sufficiently good to demonstrate the effectiveness of\nagent-based approaches for NL-to-SQL tasks. Our evaluation strategy was\npragmatic: precise enough to validate the approach, without over-engineering\nbeyond what was needed for our conclusions.<\/p>\n<h2>Conclusion<\/h2>\n<p>NL-to-SQL generation with AI agents can achieve high success rates with the\nright approach. Our experiments demonstrate that combining runtime data querying\nwith comprehensive schema information (both technical metadata and domain\ndocumentation) is critical\u2014without it, accuracy drops significantly (by 13-79\npercentage points depending on the system). Model choice matters substantially:\nClaude Sonnet 4.5 reached 80.77% vs. GPT-5 Mini at 69.23% on identical tasks.\nInvest time trialing models to find the right fit for your schema, latency\/cost\nconstraints, and error tolerance.<\/p>\n<p>However, our work also revealed fundamental limitations, particularly the\nbusiness logic errors discussed earlier that require dedicated domain expertise\nto address.<\/p>\n<p>Key takeaways for practitioners:<\/p>\n<ol>\n<li><strong>Start with schema documentation and runtime validation<\/strong> &#8211; These provide\nthe highest ROI for accuracy improvements<\/li>\n<li><strong>Design evaluation early<\/strong> &#8211; Choose evaluation criteria that align with your\ndata characteristics and business goals. This ensures your experimentation\nactually measures what matters and reveals genuine improvements.<\/li>\n<li><strong>Expect business logic iteration<\/strong> &#8211; Budget for domain expert review and\ncontinuous refinement<\/li>\n<li><strong>Trial multiple models<\/strong> &#8211; Performance differences of 10-20 percentage\npoints are common<\/li>\n<\/ol>\n<p>NL-to-SQL is no longer a research problem; it&#8217;s an engineering challenge with\nknown solutions. The question is whether the remaining ~25% error rate fits\nyour use case, and whether you can iteratively address business logic gaps\nthrough domain knowledge and feedback loops.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Evaluating AI agents for NL-to-SQL generation across Azure Databricks AI\/BI Genie, GitHub Copilot CLI, and Microsoft Agent Framework. 
We achieved ~75% accuracy with schema documentation and runtime validation, while discovering that business logic errors represent a fundamental limitation requiring domain expertise.<\/p>\n","protected":false},"author":181523,"featured_media":16636,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,19],"tags":[3400],"class_list":["post-16635","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cse","category-machine-learning","tag-ise"],"acf":[],"blog_post_summary":"<p>Evaluating AI agents for NL-to-SQL generation across Azure Databricks AI\/BI Genie, GitHub Copilot CLI, and Microsoft Agent Framework. We achieved ~75% accuracy with schema documentation and runtime validation, while discovering that business logic errors represent a fundamental limitation requiring domain expertise.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/16635","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/users\/181523"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/comments?post=16635"}],"version-history":[{"count":1,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/16635\/revisions"}],"predecessor-version":[{"id":16637,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/16635\/revisions\/16637"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media\/16636"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media?parent=16635"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/categories?post=16635"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/tags?post=16635"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}