{"id":2271,"date":"2023-07-11T07:00:38","date_gmt":"2023-07-11T14:00:38","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/azure-sql\/?p=2271"},"modified":"2023-11-10T06:31:29","modified_gmt":"2023-11-10T14:31:29","slug":"how-to-export-azure-sql-managed-instance-data-into-fabric-lakehouse","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/azure-sql\/how-to-export-azure-sql-managed-instance-data-into-fabric-lakehouse\/","title":{"rendered":"How to export Azure SQL Managed Instance data into Fabric Lakehouse"},"content":{"rendered":"<p>If you want to perform advanced analytics on your data and create stunning reports in Power BI, you should consider using <a href=\"https:\/\/learn.microsoft.com\/en-us\/fabric\/get-started\/microsoft-fabric-overview\">Microsoft Fabric<\/a>. <a href=\"https:\/\/learn.microsoft.com\/en-us\/fabric\/get-started\/microsoft-fabric-overview\">Microsoft Fabric<\/a> is a solution that combines the best Lakehouse and Warehouse technologies to provide a scalable, secure, and cost-effective platform for data analytics. With <a href=\"https:\/\/learn.microsoft.com\/en-us\/fabric\/get-started\/microsoft-fabric-overview\">Microsoft Fabric<\/a>, you can store your data in Fabric <a href=\"https:\/\/learn.microsoft.com\/en-us\/fabric\/data-engineering\/lakehouse-overview\">Lakehouses<\/a> or <a href=\"https:\/\/learn.microsoft.com\/en-us\/fabric\/data-warehouse\/data-warehousing\">Warehouses<\/a>. Both options are optimized for fast query performance and easy integration with Power BI. 
However, if your data is in other databases, such as <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/azure-sql\/managed-instance\/sql-managed-instance-paas-overview?view=azuresql-mi\">Azure SQL Managed Instance<\/a>, you would need to find a way to copy data into Fabric to analyze it.<\/p>\n<p>In the <a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/how-to-copy-data-from-azure-sql-managed-instance-to-microsoft-fabric-dw\/\">previous blog post<\/a>, I explained how to export data from your Azure SQL Managed Instance to Fabric Warehouse, where you can run analytics queries on it. But sometimes you may want to do some data engineering with Spark before you analyze the data. That&#8217;s why Fabric offers Lakehouse, an artifact that combines Spark, Fabric One Lake, the built-in unified storage layer for your data, and Warehouse endpoint for analytics.<\/p>\n<p>In this blog post, I will show how to export data from Managed Instance and store it directly in the One Lake that is linked to the Lakehouse.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/mi2lh.png\"><img decoding=\"async\" class=\"alignnone wp-image-2278 size-full\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/mi2lh.png\" alt=\"Image mi2lh\" width=\"995\" height=\"614\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/mi2lh.png 995w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/mi2lh-300x185.png 300w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/mi2lh-768x474.png 768w\" sizes=\"(max-width: 995px) 100vw, 995px\" \/><\/a><\/p>\n<p>Every Lakehouse artifact has its own part of One Lake, separated into two locations:<\/p>\n<ul>\n<li><strong>\/Tables<\/strong>\u00a0where you store the files in Delta format that are ready for analytics (gold 
layer).<\/li>\n<li><strong>\/Files<\/strong>\u00a0where you store the files that need Spark processing before you convert them to Delta and use them in the reports.<\/li>\n<\/ul>\n<p>You can choose either one as the destination for your export process, depending on your needs. In this article, we will use <strong>\/Files<\/strong>.<\/p>\n<p>We will use the CETAS T-SQL command to export table data from Managed Instance to One Lake, and then a Spark script to transform the data and convert it to Delta format.<\/p>\n<p>The steps we will follow are:<\/p>\n<ul>\n<li>Enable data export functionality on Azure SQL Managed Instance using PowerShell (if it is not already enabled).<\/li>\n<li>Find a path in the Lakehouse where we will store the data and enable Azure SQL Managed Instance to write to that location.<\/li>\n<li>Create an external data source that points to One Lake, a database scoped credential that uses the Managed Instance identity for authentication to One Lake storage, and a Parquet file format for exporting data.<\/li>\n<li>Run the CETAS command on Managed Instance to export the table data to One Lake.<\/li>\n<li>Transform the exported Parquet files to Delta format so they can be accessed through the SQL endpoint and Fabric Warehouse.<\/li>\n<\/ul>\n<h2>Enable export functionality on Managed Instance<\/h2>\n<p>Managed Instance enables you to export the content of tables to external lake storage using the CETAS command. However, this option is not enabled by default.<\/p>\n<p>To enable data exporting on Managed Instance using PowerShell, you need to install the <strong>Az.Sql<\/strong> module with the command <strong>Install-Module -Name Az.Sql<\/strong>.<\/p>\n<p>After installing the module, you need a PowerShell script that will log in to your Azure account, select the subscription that contains your Managed Instance, and set the server configuration option named \u201callowPolybaseExport\u201d to 1. 
You can use the following PowerShell commands to do that:<\/p>\n<pre class=\"prettyprint language-powershell\"><code class=\"language-powershell\">$SubscriptionID = \"&lt;your-subscription-id&gt;\"\r\nLogin-AzAccount -Subscription $SubscriptionID\r\nSet-AzSqlServerConfigurationOption `\r\n    -ResourceGroupName \"&lt;your-resource-group-name&gt;\" `\r\n    -InstanceName \"&lt;your-managed-instance-name&gt;\" `\r\n    -Name \"allowPolybaseExport\" -Value 1<\/code><\/pre>\n<p>Replace <strong>&lt;your-subscription-id&gt;<\/strong> with your actual subscription ID, and <strong>&lt;your-resource-group-name&gt;<\/strong> and <strong>&lt;your-managed-instance-name&gt;<\/strong> with your actual resource group name and Managed Instance name.<\/p>\n<p>This option enables the export of data from Managed Instance to external data sources using the CETAS T-SQL command.<\/p>\n<h2>Set up external access<\/h2>\n<p>Now you need to set up an external data source, a database scoped credential, and a Parquet file format on Managed Instance to export data as Parquet files to Azure storage.<\/p>\n<p>To set up these objects, you need to execute some T-SQL commands in your database. 
These commands will create the necessary objects to access and write data to the Lakehouse\u2019s One Lake location using the identity of Managed Instance:<\/p>\n<pre class=\"prettyprint language-sql\"><code class=\"language-sql\">CREATE MASTER KEY\r\n    ENCRYPTION BY PASSWORD = '&lt;put your master password here&gt;';\r\n\r\nCREATE EXTERNAL FILE FORMAT MigrationDemoFileFormat\r\nWITH (\r\n    FORMAT_TYPE = PARQUET,\r\n    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'\r\n);\r\n\r\nCREATE DATABASE SCOPED CREDENTIAL MigrationDemoMiCredential\r\n    WITH IDENTITY = 'Managed Identity';\r\n\r\nCREATE EXTERNAL DATA SOURCE MigrationDemoDataSource\r\nWITH (\r\n    LOCATION = '&lt;your-one-lake-url&gt;',\r\n    CREDENTIAL = MigrationDemoMiCredential\r\n);<\/code><\/pre>\n<p>Replace &lt;your-one-lake-url&gt; with the path to the One Lake location where you will export the data. 
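<\/p>\n<p>If you script this setup, note that the path you copy from Fabric must be adjusted before Managed Instance accepts it (see the note below): the <strong>abfss:\/\/<\/strong> prefix becomes <strong>abs:\/\/<\/strong> and <strong>.dfs.<\/strong> becomes <strong>.blob.<\/strong>. A small Python helper (the function name is illustrative, not part of any SDK) can apply both substitutions:<\/p>\n<pre class=\"prettyprint language-py\"><code class=\"language-py\">def to_mi_url(abfss_url):\r\n    # Managed Instance needs the abs:\/\/ prefix and the .blob. endpoint\r\n    # instead of the abfss:\/\/ prefix and the .dfs. endpoint.\r\n    return abfss_url.replace(\"abfss:\/\/\", \"abs:\/\/\", 1).replace(\".dfs.\", \".blob.\", 1)\r\n\r\nprint(to_mi_url(\"abfss:\/\/MigrationDemo@&lt;your-one-lake&gt;.dfs.fabric.microsoft.com\/MyLakehouse.Lakehouse\/Files\"))\r\n# abs:\/\/MigrationDemo@&lt;your-one-lake&gt;.blob.fabric.microsoft.com\/MyLakehouse.Lakehouse\/Files<\/code><\/pre>\n<p>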
To get this path, open your Fabric Lakehouse, select \/Files or \/Tables, and open the properties. You will see the One Lake path that you can use as the &lt;your-one-lake-url&gt; value in the location:<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/one-lake-properties.png\"><img decoding=\"async\" class=\"alignnone wp-image-2279 size-full\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/one-lake-properties.png\" alt=\"Image one lake properties\" width=\"1780\" height=\"821\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/one-lake-properties.png 1780w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/one-lake-properties-300x138.png 300w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/one-lake-properties-1024x472.png 1024w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/one-lake-properties-768x354.png 768w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/one-lake-properties-1536x708.png 1536w\" sizes=\"(max-width: 1780px) 100vw, 1780px\" \/><\/a><\/p>\n<p>The ABFS path points to the folder that you selected in the Explorer.<\/p>\n<p><div class=\"alert alert-info\"><p class=\"alert-divider\"><i class=\"fabric-icon fabric-icon--Info\"><\/i><strong>Update your ABFS path!<\/strong><\/p> To make this work, you cannot simply copy the ABFS path. 
You must make some modifications:<\/p>\n<ul>\n<li>Instead of the <strong>abfss:\/\/<\/strong> prefix, use the <strong>abs:\/\/<\/strong> prefix that is available on Managed Instance<\/li>\n<li>Replace <strong>.dfs.<\/strong> with <strong>.blob.<\/strong><\/li>\n<\/ul>\n<p>For example, if the ABFS path to your One Lake location is <strong>abfss<\/strong>:\/\/MigrationDemo@&lt;your-one-lake&gt;.<strong>dfs<\/strong>.fabric.microsoft.com\/MyLakehouse.Lakehouse\/Files, you should convert it to <strong>abs<\/strong>:\/\/MigrationDemo@&lt;your-one-lake&gt;.<strong>blob<\/strong>.fabric.microsoft.com\/MyLakehouse.Lakehouse\/Files<\/p>\n<p>At the time of writing this article, I could not export data using .dfs. in the URI.<\/p>\n<p><\/div><\/p>\n<p>Another important step is to enable your Azure SQL Managed Instance to write data into your One Lake. Find the name of your Managed Instance &#8211; for example, &#8220;my-AzSQL-mi&#8221;. Then go to your Fabric Workspace, choose the \u201cManage access\u201d option, and add this name as a service principal.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/workspace-access.png\"><img decoding=\"async\" class=\"alignnone wp-image-2281 size-full\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/workspace-access.png\" alt=\"Image workspace access\" width=\"1827\" height=\"711\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/workspace-access.png 1827w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/workspace-access-300x117.png 300w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/workspace-access-1024x399.png 1024w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/workspace-access-768x299.png 768w, 
https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/workspace-access-1536x598.png 1536w\" sizes=\"(max-width: 1827px) 100vw, 1827px\" \/><\/a><\/p>\n<p><div class=\"alert alert-primary\"><p class=\"alert-divider\"><i class=\"fabric-icon fabric-icon--Info\"><\/i><strong>SAS is not supported<\/strong><\/p>At the time of writing this article I could not use SAS key to access One Lake. Check the Fabric documentation to see if this is possible.<\/div><\/p>\n<p>Now you are ready to export your data to Fabric Lakehouse.<\/p>\n<h2>Exporting data to Lakehouse<\/h2>\n<p>To export the content of a table in Managed Instance database to Fabric One Lake, you can use the CREATE EXTERNAL TABLE AS SELECT (CETAS) command. This command will create an external table that points to a folder in your Fabric One Lake and write the result of a query to that folder in Parquet files. The command will also create the folder if it does not exist.<\/p>\n<p>The following command will export the data from the [Sales].[SalesOrderHeader] table to a folder named SalesOrders in your One Lake container that is referenced with MigrationDemoDataSource. 
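<\/p>\n<p>For reference, the command in the screenshot below has the following shape (the external table name and the <strong>SalesOrders<\/strong> folder name are illustrative; the data source and file format are the ones created earlier):<\/p>\n<pre class=\"prettyprint language-sql\"><code class=\"language-sql\">CREATE EXTERNAL TABLE [dbo].[SalesOrders]\r\nWITH (\r\n    LOCATION = 'SalesOrders',\r\n    DATA_SOURCE = MigrationDemoDataSource,\r\n    FILE_FORMAT = MigrationDemoFileFormat\r\n)\r\nAS\r\nSELECT * FROM [Sales].[SalesOrderHeader];<\/code><\/pre>\n<p>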
It will use the external data source and file format that you created before.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/CETAS.png\"><img decoding=\"async\" class=\"alignnone size-full wp-image-2274\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/CETAS.png\" alt=\"Image CETAS\" width=\"1910\" height=\"759\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/CETAS.png 1910w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/CETAS-300x119.png 300w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/CETAS-1024x407.png 1024w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/CETAS-768x305.png 768w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/CETAS-1536x610.png 1536w\" sizes=\"(max-width: 1910px) 100vw, 1910px\" \/><\/a><\/p>\n<p>You can see that it exported 30K records in 2 seconds. After executing the command, you can verify that the content is exported on One Lake storage by browsing the Lakehouse files. 
You should see one or more Parquet files with the data from the table.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/exported-files-1.png\"><img decoding=\"async\" class=\"alignnone wp-image-2277 size-full\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/exported-files-1.png\" alt=\"Image exported files\" width=\"1769\" height=\"819\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/exported-files-1.png 1769w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/exported-files-1-300x139.png 300w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/exported-files-1-1024x474.png 1024w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/exported-files-1-768x356.png 768w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/exported-files-1-1536x711.png 1536w\" sizes=\"(max-width: 1769px) 100vw, 1769px\" \/><\/a><\/p>\n<p>Now your files are in the Lakehouse staging area, and you can convert them to Delta table or transform\/clean data before conversion.<\/p>\n<h2>Convert Parquet to Delta<\/h2>\n<p>When you export data from Managed Instance to One Lake, you may use Parquet as the intermediate format. However, this format is not ideal for analytics in Fabric. That&#8217;s why you should convert your Parquet files to Delta, the recommended format for Fabric.<\/p>\n<p>To convert your Parquet files to Delta, you need to consider where you exported data in One Lake. If you have exported your data to the \/Tables location, you should convert the files right away because \/Tables should only have Delta files. 
This location is where you store the files that are ready for analytics (gold layer) and should not contain other types.<\/p>\n<p>If you exported your data to the \/Files location, which is more suitable for the staging\/bronze\/silver layers, you can combine them with other files, clean the data, and then move them to Delta in \/Tables. This location is where you store the files that require Spark processing before analytics.<\/p>\n<p>Make sure that the final Delta table is in the \/Tables location so that it is visible in the SQL endpoint for the Lakehouse and so that Fabric Warehouse can access it via the SQL endpoint. The SQL endpoint allows you to run SQL queries on your data in the Lakehouse using any SQL client.<\/p>\n<p>The easiest way to convert Parquet to Delta is to select the folder and pick the load-to-table option. This action will load the Parquet files into a Lakehouse table that will be available in the Warehouse and ready for analytics.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/convert-to-delta.png\"><img decoding=\"async\" class=\"alignnone wp-image-2275 size-full\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/convert-to-delta.png\" alt=\"Image convert to delta\" width=\"1661\" height=\"394\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/convert-to-delta.png 1661w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/convert-to-delta-300x71.png 300w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/convert-to-delta-1024x243.png 1024w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/convert-to-delta-768x182.png 768w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/convert-to-delta-1536x364.png 1536w\" sizes=\"(max-width: 1661px) 100vw, 1661px\" \/><\/a><\/p>\n<p>As a result, 
you will see the Delta table in the \/Tables area.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/delta-table.png\"><img decoding=\"async\" class=\"alignnone wp-image-2276 size-full\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/delta-table.png\" alt=\"Image delta table\" width=\"1663\" height=\"640\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/delta-table.png 1663w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/delta-table-300x115.png 300w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/delta-table-1024x394.png 1024w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/delta-table-768x296.png 768w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/delta-table-1536x591.png 1536w\" sizes=\"(max-width: 1663px) 100vw, 1663px\" \/><\/a><\/p>\n<p>Alternatively, you can create a Spark notebook that will convert Parquet to Delta or do some cleaning before conversion. Using a Spark notebook is also a straightforward way to transform data: even if you don&#8217;t make any changes, you just need to run the following code as the last step:<\/p>\n<pre class=\"prettyprint language-py\"><code class=\"language-py\">from delta.tables import DeltaTable\r\n\r\nDeltaTable.convertToDelta(spark,\r\n   \"parquet.`abfss:\/\/&lt;one-lake-location&gt;\/MyLakehouse.Lakehouse\/Files\/ExportedFiles`\")<\/code><\/pre>\n<p>See more details in <a href=\"https:\/\/techcommunity.microsoft.com\/t5\/azure-synapse-analytics-blog\/convert-plain-parquet-files-to-delta-lake-format-using-apache\/ba-p\/1454662\">this blog post<\/a>.<\/p>\n<p>Use notebooks if the default conversion doesn&#8217;t work or you need to convert data types or clean the data before converting to Delta. 
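<\/p>\n<p>As a sketch (assuming the Parquet files landed under <strong>Files\/SalesOrders<\/strong> and that a <strong>rowguid<\/strong> UUID column needs casting &#8211; both names are illustrative), a notebook cell that cleans the types and writes a Delta table could look like this, where <strong>spark<\/strong> is the session that Fabric notebooks provide:<\/p>\n<pre class=\"prettyprint language-py\"><code class=\"language-py\">from pyspark.sql.functions import col\r\n\r\n# Read the staged Parquet files from the Lakehouse \/Files area\r\ndf = spark.read.parquet(\"Files\/SalesOrders\")\r\n\r\n# Cast a column type that Delta cannot store to a supported one\r\ndf = df.withColumn(\"rowguid\", col(\"rowguid\").cast(\"string\"))\r\n\r\n# Write the result as a Delta table into the \/Tables area\r\ndf.write.format(\"delta\").mode(\"overwrite\").saveAsTable(\"SalesOrders\")<\/code><\/pre>\n<p>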
For example, you might export some SQL column types that are available in Parquet but not in Delta. In that case, you might get errors such as &#8220;UUID column is not supported&#8221;, and you can either remove the columns in CETAS, cast them to a supported type in CETAS, or transform the Parquet column using a notebook before you convert it to Delta.<\/p>\n<h2>Query the table in Fabric Warehouse<\/h2>\n<p>After you convert the files to Delta in the \/Tables folder, you can access the table from the SQL endpoint for the Fabric Lakehouse. You can see the table name in the list of tables in the SQL endpoint UI.<\/p>\n<p>If you want to use the Lakehouse table data in a warehouse, you <strong>don&#8217;t need to copy data from Lakehouse to Warehouse<\/strong>. You can query the Lakehouse data from the warehouse using the 3-part name convention &lt;lakehouse name&gt;.dbo.&lt;table name&gt;.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/wh-lh-query.png\"><img decoding=\"async\" class=\"alignnone wp-image-2280 size-full\" src=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/wh-lh-query.png\" alt=\"Image wh lh query\" width=\"1920\" height=\"1000\" srcset=\"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/wh-lh-query.png 1920w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/wh-lh-query-300x156.png 300w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/wh-lh-query-1024x533.png 1024w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/wh-lh-query-768x400.png 768w, https:\/\/devblogs.microsoft.com\/azure-sql\/wp-content\/uploads\/sites\/56\/2023\/07\/wh-lh-query-1536x800.png 1536w\" sizes=\"(max-width: 1920px) 100vw, 1920px\" \/><\/a><\/p>\n<p>You can see that I&#8217;m connected to a Fabric Warehouse named MyWarehouse, and 
I&#8217;m querying the table in the Fabric Lakehouse named MyLakehouse.<\/p>\n<p>This works if the Fabric Warehouse and the Fabric Lakehouse are in the same workspace.<\/p>\n<h2>Conclusion<\/h2>\n<p>This blog post shows you how to export data from Azure SQL Managed Instance to Microsoft Fabric Lakehouse using the CETAS command. This is a great solution for SQL developers who want a simple and performant way to transfer data using SQL scripts. The transfer is done using Parquet format, which is highly compressed and reduces the export time even for large tables. You can easily automate this process using code or T-SQL script and export multiple tables from your Managed Instance. Exporting to Lakehouse also allows you to refine your data further before you analyze it. You can use either SQL endpoint to access the data directly or access the exported data from Fabric Warehouse without copying the data.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>If you want to perform advanced analytics on your data and create stunning reports in Power BI, you should consider using Microsoft Fabric. Microsoft Fabric is a solution that combines the best Lakehouse and Warehouse technologies to provide a scalable, secure, and cost-effective platform for data analytics. 
With Microsoft Fabric, you can store your data [&hellip;]<\/p>\n","protected":false},"author":32538,"featured_media":2278,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[453,1,572,570,571],"tags":[439,574,575],"class_list":["post-2271","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-apache-spark","category-azure-sql","category-managed-instance","category-microsoft-fabric","category-warehouse","tag-export","tag-lakehouse","tag-parquet"],"acf":[],"blog_post_summary":"<p>If you want to perform advanced analytics on your data and create stunning reports in Power BI, you should consider using Microsoft Fabric. Microsoft Fabric is a solution that combines the best Lakehouse and Warehouse technologies to provide a scalable, secure, and cost-effective platform for data analytics. With Microsoft Fabric, you can store your data [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts\/2271","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/users\/32538"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/comments?post=2271"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/posts\/2271\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/media\/2278"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/media?parent=2271"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblog
s.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/categories?post=2271"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sql\/wp-json\/wp\/v2\/tags?post=2271"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}