{"id":13984,"date":"2021-11-17T11:11:19","date_gmt":"2021-11-17T19:11:19","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/cse\/?p=13984"},"modified":"2023-06-19T11:04:44","modified_gmt":"2023-06-19T18:04:44","slug":"building-an-azure-based-industrial-internet-of-things-solution-for-factory-automation","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/ise\/building-an-azure-based-industrial-internet-of-things-solution-for-factory-automation\/","title":{"rendered":"Building an Azure based Industrial Internet of Things Solution for Factory Automation"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>In 2020, Microsoft CSE was involved in the process of a Smart Factory Transformation leveraging Microsoft technology, building an Industrial Internet of Things solution for factory automation. The customer was a global technology company that supplies systems for passenger cars, commercial vehicles, and industrial technology. Previously, their factories operated with a paper-based shop floor management process where production related data was documented manually with no analysis functionality. 
The purpose of the transformation was to improve the productivity of their plants and workers and improve the quality of the goods they produce.<\/p>\n<h2>Challenge and Objectives<\/h2>\n<p>The key goals of the project included:<\/p>\n<ul>\n<li>Monitor key performance indicators of machines running on the factory floor, and configure which machines should be monitored, to be able to react to any issues quickly before they affect the customer, while also enabling other monitoring benefits for shop floor management.<\/li>\n<li>Trace parts and batches end-to-end in their lifecycle with the ability to quantify waste in their value streams and identify issues across key steps.<\/li>\n<li>Leverage maintenance intelligence to reduce the downtime of machinery.<\/li>\n<\/ul>\n<p>As a lighthouse project, we started transforming their flagship factory into a <strong>digital platform<\/strong> to pave the way for the transformation of over 100 of their factories.<\/p>\n<p>One dev crew from Microsoft worked on the connectivity part of the project for 15 months (organized into four project iterations). This part included collecting OPC-UA based telemetry data on the edge within the plant and forwarding it to Azure for enrichment and storage. 
We also provided a rich set of APIs used by a custom user interface to view the data and manipulate the digital representation of the machines and signals in the cloud.<\/p>\n<p>This article outlines the project from the connectivity point of view.<\/p>\n<h2>Project Goals<\/h2>\n<ol>\n<li>Empower a wider community to drive the digital transformation by connecting devices as an enabler.\n<ul>\n<li>We have a proven set of steps for onboarding OPC-UA capable devices.<\/li>\n<li>Onboarding process is observable.<\/li>\n<li>Configuration process is automated for KepServerEx with drivers for Siemens S7, Modbus and Euromap devices.<\/li>\n<li>GUI is available for configuration.<\/li>\n<li>We can publish and enrich machine-generated data as messages that can be selectively subscribed\/consumed by Factory Intelligence applications.<\/li>\n<\/ul>\n<\/li>\n<li>Shorten the commissioning of machinery.<\/li>\n<li>Increase the security and stability of configuration changes by reducing the operational risk they cause.\n<ul>\n<li>Configuration history is available.<\/li>\n<li>Gap analysis using a threat model process, including a mitigation plan.<\/li>\n<li>RBAC model in place for controlling access to the solution.<\/li>\n<\/ul>\n<\/li>\n<li>Increase continuous data availability and data quality.\n<ul>\n<li>Successful device onboarding can be validated based on immediate feedback.<\/li>\n<\/ul>\n<\/li>\n<li>Scale the connectivity solution to enable the onboarding of additional factory plants.\n<ul>\n<li>Connectivity infrastructure supports stamp-based deployment architecture.<\/li>\n<li>Connectivity APIs are versioned.<\/li>\n<li>Exposed entities are uniquely identifiable.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<h2>Architecture<\/h2>\n<p>To reduce the complexity of the Industrial Internet of Things solution architecture, we first broke it down into two parts. 
One part runs on the factory floor and is used to collect machine data and configure on-site resources. The second part is deployed to the cloud and makes up the data processing, enrichment, monitoring and visualization part of the system.<\/p>\n<h2>The Industrial Internet of Things Solution on the Edge<\/h2>\n<p><img decoding=\"async\" class=\"alignnone wp-image-13985\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image.png\" alt=\"Diagram of the Industrial Internet of Things solution architecture on the Edge \/ in the factory.\" width=\"1280\" height=\"720\" srcset=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image.png 1280w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-300x169.png 300w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-1024x576.png 1024w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-768x432.png 768w\" sizes=\"(max-width: 1280px) 100vw, 1280px\" \/><\/p>\n<p><em>Diagram of the Industrial Internet of Things solution architecture on the Edge \/ in the factory.<\/em><\/p>\n<p>The <strong>IoT Edge<\/strong> device is the only portion of the solution that needs to reside on the factory floor. All other components run on Microsoft Azure. Its task is to collect signals (telemetry data) from OPC-UA and non-OPC-UA devices. Non-OPC-UA devices are managed by an OPC-UA gateway which collects telemetry data from them and outputs it as OPC-UA signals. Currently, KepServerEx is in use as the gateway.<\/p>\n<p>The <strong>OPC Publisher<\/strong> edge module, which is part of the <a href=\"https:\/\/github.com\/Azure\/Industrial-IoT\">Industrial Internet of Things platform<\/a> offered by Microsoft on GitHub, collects all data from the OPC-UA devices and gateways and sends it to IoT Hub. 
It is configured via a JSON file which gets pushed by the Pipeline Publisher as a cloud-to-device direct message through IoT Hub to a custom Edge module called the <strong>Configuration Controller<\/strong>. The task of the Configuration Controller is to monitor which configurations are supposed to be active, request them via the module Twin, receive them in compressed chunks (a single cloud-to-device direct message can contain up to 256 KB of data), then decompress, assemble, and write them to a shared filesystem in the JSON format expected by the OPC Publisher module. OPC Publisher detects changes to this file and updates its OPC UA node subscriptions accordingly.<\/p>\n<p>The custom <strong>Gateway Configuration<\/strong> module is used by the Asset Registry to configure the gateway (KepServerEx) through its REST API, to onboard non-OPC-UA devices connected to the gateway. It can also connect to the gateway using opc.tcp and request sample metrics data from the newly configured devices to verify that they are working properly.<\/p>\n<p>The <strong>OPC Twin<\/strong> edge module, also a part of the Industrial Internet of Things platform, is used to browse OPC UA devices and report their OPC UA nodes, i.e. the signals they can emit, to the Asset Registry. It is also controlled via cloud-to-device direct methods sent by the Asset Registry through IoT Hub.<\/p>\n<p>The <strong>Metrics Collector<\/strong> module, a new addition to the Industrial Internet of Things Platform, collects health metrics from all Edge modules that publish data on a Prometheus endpoint and forwards them to a Log Analytics workspace. From there, they are displayed both in the IoT Hub workbooks provided by Microsoft and in a custom dashboard that monitors the entire solution. 
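The chunked configuration transfer performed by the Configuration Controller can be sketched in a few lines. This is an illustrative Python sketch under assumed details: the real module is an IoT Edge module written in C#, and the exact chunk encoding used by the project is not documented here.

```python
import base64
import json
import zlib


def split_into_chunks(config: dict, chunk_size: int = 200_000) -> list[str]:
    # Compress the full JSON configuration, then base64-encode and slice it
    # so each piece fits well under the per-message payload limit.
    raw = zlib.compress(json.dumps(config).encode('utf-8'))
    encoded = base64.b64encode(raw).decode('ascii')
    return [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]


def assemble_chunks(chunks: list[str]) -> dict:
    # Reverse the process: concatenate, base64-decode, decompress, parse.
    # The result would then be written to the shared filesystem in the
    # JSON format expected by the OPC Publisher module.
    raw = zlib.decompress(base64.b64decode(''.join(chunks)))
    return json.loads(raw.decode('utf-8'))
```

Splitting with a small chunk size shows how a configuration larger than a single message payload would round-trip through several messages before being reassembled on the Edge device.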
Custom alerts are then triggered if the Edge devices are deemed unhealthy, based on available disk space, system memory, the device-to-cloud message queue length, and the time since the Metrics Collector module last successfully transmitted metrics data.<\/p>\n<p><img decoding=\"async\" class=\"alignnone wp-image-13986\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-1.png\" alt=\"IoT Hub Edge device metrics workbook: Edge module details and health.\" width=\"1430\" height=\"1072\" srcset=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-1.png 1430w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-1-300x225.png 300w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-1-1024x768.png 1024w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-1-768x576.png 768w\" sizes=\"(max-width: 1430px) 100vw, 1430px\" \/><\/p>\n<p><em>IoT Hub Edge device metrics workbook: Edge module details and health.<\/em><\/p>\n<p><img decoding=\"async\" class=\"alignnone wp-image-13987\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-2.png\" alt=\"IoT Edge device health overview in the solution\u2019s custom health dashboard.\" width=\"840\" height=\"374\" srcset=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-2.png 840w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-2-300x134.png 300w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-2-768x342.png 768w\" sizes=\"(max-width: 840px) 100vw, 840px\" \/><\/p>\n<p><em>IoT Edge device health overview in the solution\u2019s custom health dashboard.<\/em><\/p>\n<p>The shared filesystem also hosts certificates and a chain of trust used by the OPC Twin and 
OPC Publisher modules to securely communicate with the OPC-UA gateway.<\/p>\n<p>The Edge devices are automatically configured by IoT Hub with the proper Edge modules based on IoT Edge deployments in IoT Hub, as part of the release process.<\/p>\n<h2>The Industrial Internet of Things Solution in the Cloud<\/h2>\n<p><img decoding=\"async\" class=\"alignnone wp-image-13988\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2021\/11\/graphical-user-interface-description-automaticall-8.png\" alt=\"Diagram of the Industrial Internet of Things solution architecture in the cloud.\" width=\"1280\" height=\"720\" srcset=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/graphical-user-interface-description-automaticall-8.png 1280w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/graphical-user-interface-description-automaticall-8-300x169.png 300w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/graphical-user-interface-description-automaticall-8-1024x576.png 1024w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/graphical-user-interface-description-automaticall-8-768x432.png 768w\" sizes=\"(max-width: 1280px) 100vw, 1280px\" \/><\/p>\n<p><em>Diagram of the Industrial Internet of Things solution architecture in the cloud.<\/em><\/p>\n<p>The services in the cloud serve two purposes: First, the Asset Registry and Pipeline Configuration act as the digital representation of all the assets in the factory that the system interacts with. Much of their functionality is exposed as REST APIs, making them the control panel of the entire system that the UI uses. 
Second, the Machine Data Enrichment pipeline makes the signal data from the factory, ingested as telemetry to IoT Hub or read from files by the File Signal Processor, available to the other participants in an enriched, cleaned, and understandable form.<\/p>\n<p>The <strong>Asset Registry<\/strong> is the source of truth for the entire system. It tracks all assets and their relationships, such as plants, devices, signals, gateways, and IoT Edge devices. Any change must go through the Asset Registry, and if necessary, it will synchronize this change to the external components. For example, if the user adds a gateway device to the Asset Registry, it will go ahead and configure this device on KepServerEx through the Gateway Configuration Edge module on the factory floor.<\/p>\n<p>Internally, it\u2019s implemented as a shared Azure SQL database with several APIs:<\/p>\n<ul>\n<li><strong>Asset Registry API:<\/strong> a simple CRUD API that gives access to most entities in a uniform fashion.<\/li>\n<li><strong>Pipeline Configuration API <\/strong>and<strong> Pipeline Publisher API:<\/strong> APIs for signal pipeline management. We don\u2019t just configure individual signals; we allow users to bundle them into \u201cpipeline configurations\u201d, version them, and activate all the signals in a given bundle at once. 
We also keep a history of active pipeline configurations, so the users can roll back to a previous pipeline configuration in case of an error, for example.<\/li>\n<li><strong>Frontend API:<\/strong> manually optimized, complex, read-only SQL queries to display data in the UI when CRUD operations aren\u2019t enough.<\/li>\n<\/ul>\n<p><img decoding=\"async\" class=\"alignnone wp-image-13989\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-3.png\" alt=\"The Industrial Internet of Things solution Asset Registry Data Model.\" width=\"1426\" height=\"606\" srcset=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-3.png 1426w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-3-300x127.png 300w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-3-1024x435.png 1024w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-3-768x326.png 768w\" sizes=\"(max-width: 1426px) 100vw, 1426px\" \/><\/p>\n<p><em>The Industrial Internet of Things solution Asset Registry Data Model.<\/em><\/p>\n<p><strong>Machine Data Enrichment<\/strong> is a data pipeline that transforms a stream of \u201craw\u201d telemetry signals coming from the factory into a form usable by other internal teams and applications.<\/p>\n<p>We found <strong>Stream Analytics<\/strong> to be the ideal Azure service for this goal. It can enrich a stream of events with reference data obtained from the Asset Registry SQL database. It can then filter or replicate the stream to multiple destinations. All signals get forwarded automatically to the hot path, an <strong>Azure Event Hub<\/strong>. Signals requiring long retention times also get sent to the cold path, an <strong>Azure Data Lake<\/strong>. 
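Conceptually, the enrichment step joins each raw telemetry event with reference data from the Asset Registry and routes events that cannot be matched to a separate output. The following Python sketch mimics that logic; the field names (nodeId, machine, unit) are invented for illustration and are not the actual schema used by the project.

```python
def enrich_events(events: list[dict], reference: dict) -> tuple[list[dict], list[dict]]:
    # reference maps an OPC UA node id to signal metadata from the Asset Registry.
    enriched, unmatched = [], []
    for event in events:
        metadata = reference.get(event.get('nodeId'))
        if metadata is None:
            # In the real pipeline, these land in a review location instead
            # of being silently dropped.
            unmatched.append(event)
        else:
            enriched.append({**event, **metadata})
    return enriched, unmatched
```

In the actual pipeline, Stream Analytics performs this join declaratively against reference data loaded from the Asset Registry SQL database and fans the enriched stream out to the hot and cold paths.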
Signals that couldn\u2019t be properly enriched are added to a file in <strong>Azure Storage <\/strong>for easy review. All of this happens with minimal configuration effort and with very little added latency.<\/p>\n<p>Consumers of signal data can then choose which data source to use. For example, to assess signal quality, we save all signals from the Hot Path to <strong>Azure<\/strong> <strong>Data Explorer<\/strong> and build a <strong>Dashboard<\/strong> on top to visualize it.<\/p>\n<p>In the middle of the engagement, we got a requirement to support a new, second source of signal data, namely CSV files. These signals are collected from devices and uploaded as files, and the Azure Data Explorer enrichment pipeline schema proved flexible enough to support this. We had two choices: add a second input to the existing Stream Analytics job or create a new Stream Analytics job to handle this new input stream. Ultimately, we chose the second option because the new event stream had a different event schema and would also arrive in large batches every few hours.<\/p>\n<p>Another feature that the <strong>Machine Data Enrichment<\/strong> pipeline enables is tagging signals based on the use case they enable, as identified by a specialist who understands both the asset and the scenario. Tags are an array of predefined strings that include information about the signal in context, such as \u201cFault of type X\u201d or \u201cRelevant for end-to-end traceability\u201d. Signals can then later be queried based on their tags.<\/p>\n<p>The solution also includes a web-based <strong>User Interface<\/strong> which allows the customer to interact with and configure the Asset Registry and Pipeline Configuration.<\/p>\n<h2>Architectural Design Choices<\/h2>\n<h3>Why not use IoT Central?<\/h3>\n<p>Even though IoT Central offers a quick path for creating powerful IoT solutions in Azure with a PaaS offering, we decided not to use it. 
The solution did not meet some of our key customer requirements: integration with AAD, device-level authorization, extensibility, and geo-presence. Moreover, much of the value added by IoT Central, such as visualization and reporting, was already covered by a framework the customer had in place.<\/p>\n<h3>Why OPC-UA and not MQTT?<\/h3>\n<p>Some gateways support forwarding events into MQTT. Leveraging MQTT would simplify publishing data to the cloud, as we could connect it directly to Edge Hub or Azure IoT Hub. However, the vision of the client, backed by the Azure Industrial Internet of Things architecture, is that by standardizing on OPC-UA, we can eliminate the need for a gateway as the shop floor machinery is updated. Moreover, OPC-UA offers features such as discoverability which enable scenarios that MQTT alone wouldn\u2019t.<\/p>\n<h3>Why use KepServerEx and not another gateway?<\/h3>\n<p>We decided to rely on partners to translate proprietary protocols instead of adding custom components for each vendor. When choosing a partner gateway, we decided with the customer to start with KepServerEx, as it supports most of the protocols the customer uses across its worldwide factories. The client has experience in using the product, and it exposes its configuration through a REST API, which the project used to remotely configure the gateway.<\/p>\n<h3>Service Bus vs Event Grid<\/h3>\n<p>The first use case for Service Bus is to consume Twin changes from Azure IoT Hub to determine and coordinate data pipeline configurations being pushed to the edge.<\/p>\n<p>Service Bus was our choice as it fulfils our requirements, which include competing consumers, pub\/sub, and ordering of messages, as well as alignment with the overall system architecture. 
We did not want to add an additional Azure service to solve a single problem.<\/p>\n<h3>SQL Server vs Cosmos DB vs Azure Digital Twins<\/h3>\n<p>From a database point of view, our requirements are to store the following information:<\/p>\n<ul>\n<li>Machine semantic model and mapping to OPC-UA resources (servers, endpoints, and nodes).<\/li>\n<li>Data pipeline configuration (signals, transformations, and enrichments).<\/li>\n<\/ul>\n<p>We believe that Azure Digital Twins would be a good fit to solve the family tree problem (enterprises, divisions, factories, production lines, work centers, etc.). The family tree data, however, is not owned by this project; only machine data is. Therefore, the project wouldn\u2019t be using many of Azure Digital Twins\u2019 features, such as feeding telemetry into twins or getting twin statuses and alerts. For this reason, we decided not to use it.<\/p>\n<p>The requirements we have for a database match what a relational database offers:<\/p>\n<ul>\n<li>Relationships and integrity (machines -&gt; devices -&gt; signals -&gt; signal type)<\/li>\n<li>Temporal data (example: this device was assigned to this machine until 2 days ago)<\/li>\n<li>Integration with Stream Analytics: one of the options for data enrichment is to use Stream Analytics to augment the telemetry data with machine metadata.<\/li>\n<\/ul>\n<p>We decided to use SQL Server as it supports the requirements and is a product the customer already had experience with.<\/p>\n<h3>Data Destination Storage Options \u2013 Azure Data Explorer vs Time Series Insights, SQL Server or Cosmos DB<\/h3>\n<p>We also required a single, queryable database to serve as the source for enriched machine-generated data. 
The technologies considered for this were: Azure Data Explorer, Time Series Insights, SQL Server and Cosmos DB.<\/p>\n<p>We chose Azure Data Explorer because we require an append-only, low-latency analytical database. ADX also offers an out-of-the-box solution for data retention, clean-up and cost optimization with a flexible <a href=\"https:\/\/docs.microsoft.com\/azure\/data-explorer\/kusto\/management\/cachepolicy\">caching policy<\/a>. And, as this technology was already being used in the project, we were able to reuse the existing cluster, and no additional expertise or monitoring was required.<\/p>\n<p>Azure Data Explorer has good support for Power BI for quick visualization of the data, but in the end we used <a href=\"https:\/\/docs.microsoft.com\/azure\/data-explorer\/azure-data-explorer-dashboards\">Azure Data Explorer Dashboards<\/a> as it was the easiest and fastest solution for our visualization requirements.<\/p>\n<h2>Testability<\/h2>\n<p>The architecture allows us to run fully automated end-to-end tests in the cloud. The Edge device can run in a virtual machine with an added Edge module, the <a href=\"https:\/\/docs.microsoft.com\/samples\/azure-samples\/iot-edge-opc-plc\/azure-iot-sample-opc-ua-server\/\">opc-plc server<\/a>. This is a stand-alone, easy-to-configure OPC-UA server that can stand in for the gateways or OPC-UA devices without any additional infrastructure. The Edge modules can connect to it in the same way they do in the factory, just using a different address.<\/p>\n<p>As the cloud services all expose REST APIs, the end-to-end tests can use them to provision a blank database with all the data needed, create pipeline configurations, deploy them to the Edge, and monitor the connected IoT Hub\u2019s Event Hub compatible endpoint to see that the expected data is being received. 
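The final verification step of such an end-to-end test boils down to comparing the signals provisioned through the REST APIs against what actually arrives on the Event Hub compatible endpoint. A minimal Python sketch of that check, with invented field names:

```python
def missing_signal_ids(provisioned_ids: list[str], received_events: list[dict]) -> list[str]:
    # Signals we configured through the REST APIs but never saw arrive
    # on the Event Hub compatible endpoint within the test window.
    received_ids = {event.get('signalId') for event in received_events}
    return sorted(set(provisioned_ids) - received_ids)
```

An empty result means every signal in the deployed pipeline configuration produced at least one telemetry message during the test run.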
With only minor changes, the same end-to-end tests can also trigger the File Based Signal processor to pick up a pre-defined file from a blob store and make sure it processes the file-based signals correctly.<\/p>\n<h2>Observability<\/h2>\n<p>We use several features from the <strong>Azure Monitor<\/strong> suite to build robust observability of the whole fleet:<\/p>\n<ul>\n<li><strong>Standard Metrics<\/strong> that all Azure services export. They are especially useful for monitoring the Machine Data Enrichment pipeline, which consists exclusively of Azure services.<\/li>\n<li><strong>Custom Metrics<\/strong> that we export from our custom C# applications to Azure Monitor. We use them to measure, among other things, business-logic error rates and the throughput of the File Based Signal processor.<\/li>\n<li><strong>Metrics Collector<\/strong> module that exports module metrics from IoT Edge running on-premises to Azure Monitor in the cloud.<\/li>\n<li>We set up several <strong>Custom Dashboards<\/strong> where we display selected, important metrics. There is one global dashboard that describes the overall service health, and multiple specific dashboards showing detailed information on topics like the signal enrichment pipeline health or signal quality.<\/li>\n<li>We have <strong>Alerts<\/strong> for the key metrics and get notified when any part of the service is unhealthy or is likely to become unhealthy soon. We have also tried to limit the number of alerts to avoid alert fatigue.<\/li>\n<li><strong>Distributed Tracing<\/strong> gives us visibility into how our services interact with each other, which is especially useful when troubleshooting requests that failed or are too slow. The Application Insights .NET SDK collects distributed traces automatically for all synchronous calls between our custom code and Azure services. 
We also extended this to work with asynchronous calls by explicitly passing the trace ID along with any asynchronous request and continuing the trace with this ID on the recipient\u2019s side.<\/li>\n<li><strong>Log Analytics<\/strong> was used extensively to create log queries over data in Azure Monitor Logs. These queries surface as visuals in the custom dashboard or as triggers for alerts.<\/li>\n<\/ul>\n<p>We found it beneficial to add the <strong>Application Insights SDK<\/strong> to all our custom C# apps early in their development. It automatically collects distributed traces and exceptions from the application, which helps with both local and remote troubleshooting.<\/p>\n<p><img decoding=\"async\" class=\"alignnone wp-image-13990\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-4.png\" alt=\"Tracing the browsing of OPC-UA devices between the Asset Registry and the Edge in Application Insights.\" width=\"1429\" height=\"829\" srcset=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-4.png 1429w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-4-300x174.png 300w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-4-1024x594.png 1024w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2021\/11\/word-image-4-768x446.png 768w\" sizes=\"(max-width: 1429px) 100vw, 1429px\" \/><\/p>\n<p><em>Tracing the browsing of OPC-UA devices between the Asset Registry and the Edge in Application Insights.<\/em><\/p>\n<h2>Conclusion<\/h2>\n<p>By the end of the engagement, the customer had learned a lot about building industrial Internet of Things solutions using Microsoft Azure services. We on the Microsoft team learned about the challenges of running software on the Edge in a factory and interfacing with various OPC-UA servers and gateways. 
The project goals were all met.<\/p>\n<p>Among the lessons we learned: we benefited from using domain-driven design to develop the models and to build a ubiquitous language and a mutual understanding of the domain together with the customer. Also, documenting the principles with code samples helped to educate and align the developers.<\/p>\n<p>We approached our problems in a pragmatic way and focused on simplicity, resisting the urge to over-engineer the solution or to practice Conference-Driven Development.<\/p>\n<p>Consistently starting the development of new components with architectural design reviews, writing the Markdown files together using Visual Studio Live Share, and gathering know-how by doing short spikes to test the viability of ideas proved valuable. This ensured we included everyone on the team in the envisioning phase and allowed us to detect potential gaps or issues in our approaches that could lead to problems or dead ends later.<\/p>\n<p>Finally, spending some initial time discussing and deciding on the project structure and naming conventions, and adding linters and code styling rules early to the project, helped everyone start working together quickly. This was especially important as we were pair-programming between developers from different companies. 
This was further helped by constantly keeping Teams chats open that could be joined at any time by peers working on the same part of the solution.<\/p>\n<h2>Acknowledgements<\/h2>\n<p>Contributors to the solution from the Microsoft team listed in alphabetical order by last name: <a href=\"https:\/\/www.linkedin.com\/in\/francisco-beltrao-58521a\/\">Francisco Beltrao<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/mikhailchatillon\/\">Mikhail Chatillon<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/saschacorti\/\">Sascha Corti<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/laura-damian-05aab65\">Laura Damian<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/gukov\/\">Konstantin Gukov<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/gretajocyte\/\">Greta Jocyte<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/anaig-marechal\/\">Anaig Marechal<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/martin-weber-ch\/\">Martin Weber<\/a>. We had great support from <a href=\"https:\/\/www.linkedin.com\/in\/williamberryiii\/\">Bill Berry<\/a>, Ian Davis, and <a href=\"https:\/\/www.linkedin.com\/in\/llieberman\/\">Larry Lieberman<\/a> from the CSE IoT SA Team, who built a generator for a KEPServerEX configuration API library. We also received great feedback and support from the Microsoft IoT Team in Germany, namely <a href=\"https:\/\/www.linkedin.com\/in\/vitaliy-slepakov-904308153\/\">Vitaly Slepakov<\/a> and <a href=\"https:\/\/www.linkedin.com\/in\/hans-gschossmann-02b4531\/\">Hans Gscho\u00dfmann<\/a>.<\/p>\n<h2>Resources<\/h2>\n<p>Azure Industrial Internet of Things Platform Overview:\n<a href=\"https:\/\/docs.microsoft.com\/azure\/industrial-iot\/overview-what-is-industrial-iot\">https:\/\/docs.microsoft.com\/azure\/industrial-iot\/overview-what-is-industrial-iot<\/a><\/p>\n<p>Azure Industrial Internet of Things Platform Repository:\n<a href=\"https:\/\/github.com\/Azure\/Industrial-IoT\">https:\/\/github.com\/Azure\/Industrial-IoT<\/a><\/p>\n<p>OPC Publisher Edge Module:\n<a 
href=\"https:\/\/azure.github.io\/Industrial-IoT\/modules\/publisher.html\">https:\/\/azure.github.io\/Industrial-IoT\/modules\/publisher.html<\/a><\/p>\n<p>OPC Twin Edge Module:\n<a href=\"https:\/\/azure.github.io\/Industrial-IoT\/modules\/twin.html\">https:\/\/azure.github.io\/Industrial-IoT\/modules\/twin.html<\/a><\/p>\n<p>KepServerEx:\n<a href=\"https:\/\/www.kepware.com\/en-us\/products\/kepserverex\">https:\/\/www.kepware.com\/en-us\/products\/kepserverex<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft CSE was involved in the process of a smart factory transformation in the automotive space, leveraging Microsoft technology, building an Industrial Internet of Things solution for factory automation.<\/p>\n","protected":false},"author":75443,"featured_media":13997,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,18],"tags":[3323,60,3326,3324,3325,216],"class_list":["post-13984","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cse","category-iot","tag-automotive","tag-azure","tag-factory-automation","tag-industrial-internet-of-things","tag-industry-4-0","tag-iot"],"acf":[],"blog_post_summary":"<p>Microsoft CSE was involved in the process of a smart factory transformation in the automotive space, leveraging Microsoft technology, building an Industrial Internet of Things solution for factory 
automation.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/13984","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/users\/75443"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/comments?post=13984"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/13984\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media\/13997"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media?parent=13984"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/categories?post=13984"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/tags?post=13984"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}