{"id":14751,"date":"2023-06-01T03:00:00","date_gmt":"2023-06-01T10:00:00","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/cse\/?p=14751"},"modified":"2024-07-18T11:50:30","modified_gmt":"2024-07-18T18:50:30","slug":"workflow-engine-on-dtfx","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/ise\/workflow-engine-on-dtfx\/","title":{"rendered":"Building a custom workflow engine on top of Durable Task Framework DTFx"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>In late 2022, we were approached by a large customer in the automotive industry who asked us to help them implement a\nself-service solution with the ability for bi-directional communication to their PLCs (Programmable Logic Controllers).\nBefore our engagement with them, PLC operators would need to manually configure each of the relevant PLCs on the factory\nfloor to obtain desired output\/results. Our project helped eliminate or simplify these manual steps, which in turn\nhelped the customer iterate and scale production faster, while potentially cutting down on manual errors.<\/p>\n<p>Given that production of such hardware components requires a series of steps to be executed in sequence as a <em>workflow<\/em>,\nwe found that workflow engines would be good candidates to base our solution upon. From the variety of existing workflow\nengines we selected the <a href=\"https:\/\/github.com\/Azure\/durabletask\">Durable Task Framework (DTFx)<\/a> due to its performance\ncharacteristics, broad capabilities, big community and Microsoft support.<\/p>\n<p>However, out-of-the-box DTFx also did not meet all of our requirements and we built some features on top of it. With this\nblog post, we detail what we built, explaining the why and how. 
We hope this information will be helpful if you are\nconsidering using DTFx and want to tailor it to your specific needs.<\/p>\n<h2>Brief overview of requirements and mapping to the features we built<\/h2>\n<p>A fundamental requirement of the solution was to be <em>self-service<\/em> in the sense that factory operators should be able\nto define workflows to cover future use-cases. Given that operators often lack coding skills, we addressed this\nrequirement with the introduction of a <a href=\"#domain-specific-language-dsl\">Domain-specific language (DSL)<\/a> that acts as an\nabstraction layer, enabling operators to create workflows in an easy and user-friendly way.<\/p>\n<p>Another requirement was to be able to influence the workflow execution based on input provided externally at workflow\nexecution time or depending on values generated while running the workflow, e.g., the current value of a PLC node. To\naddress this we introduced <a href=\"#dynamic-expressions-and-data-flow\">dynamic expressions and a data-flow<\/a> to pass data from a\nworkflow step to subsequent steps.<\/p>\n<p>The workflows we are dealing with have (write) access to machines on the factory floor, so <a href=\"#validation-of-dynamic-expressions\">validation of dynamic\nexpressions<\/a> and of <a href=\"#workflow-validation\">the workflow as a whole<\/a> is crucial to\nensure safety and communicate issues early to factory operators.<\/p>\n<p>In some cases, workflows could take a long time to complete or even hang altogether. This could happen for\nvarious reasons, such as incorrect information in a workflow configuration or transient network issues on the factory\nfloor. To avoid this and recover gracefully, we provided a way to handle <a href=\"#workflow-timeout-and-cancellation\">workflow timeouts and\ncancellations<\/a>.<\/p>\n<p>In other cases, workflows need to execute a certain &#8220;cleanup&#8221; action independently of the result of the execution. 
To\ncover this requirement we added a <a href=\"#workflow-closure-step\">workflow closure step<\/a>.<\/p>\n<h2>Contents<\/h2>\n<ul>\n<li><a href=\"#domain-specific-language-dsl\">Domain-specific language (DSL)<\/a><\/li>\n<li><a href=\"#dynamic-expressions-and-data-flow\">Dynamic expressions and data flow<\/a><\/li>\n<li><a href=\"#workflow-validation\">Workflow validation<\/a><\/li>\n<li><a href=\"#workflow-timeout-and-cancellation\">Workflow timeout and cancellation<\/a><\/li>\n<li><a href=\"#workflow-closure-step\">Workflow closure step<\/a><\/li>\n<li><a href=\"#summary\">Summary<\/a><\/li>\n<\/ul>\n<h2>Domain-specific language (DSL)<\/h2>\n<p>In DTFx, workflows are defined exclusively through code. To empower factory operators with the ability to define workflow\nsteps without coding, we recognized the need for a Domain-specific Language (DSL). Besides the fact that operators often\nlack coding skills, adding workflows via code would also be error-prone, so we aimed to create a DSL that would act as\nan abstraction layer, enabling operators to create workflows in an easy and user-friendly way. As an example, operators\nwouldn&#8217;t have to enter detailed information on how and where PLC nodes can be reached because this information was\n&#8220;enriched&#8221; from our backend, minimizing the workflow definition inputs required of operators.<\/p>\n<p>Although end users will eventually interact with the solution through a UI, which will generate the underlying workflow\ndefinition JSON, having a well-designed DSL was important to onboard users quickly, even before the UI was ready. It\nwould also support more advanced future use-cases like workflow definition versioning and having definition JSON\ngenerated by external systems.<\/p>\n<p>To achieve this, the concept of <em>workflow definition<\/em> was separated from <em>workflow configuration<\/em>. 
You can think of the\ndefinition as a &#8220;compile-time&#8221; construct that uses user-facing terms like signals, and of the configuration as a\n&#8220;runtime&#8221; construct containing all the details needed to execute the workflow.<\/p>\n<p>As the base language for the DSL, we chose JSON over YAML because it is easier to write and has better support in C# libraries. The fact\nthat DTFx uses JSON internally to serialize its state was another reason to choose it.<\/p>\n<h2>Dynamic expressions and data flow<\/h2>\n<p>As we saw previously, the workflow definition time is not the same as the workflow execution time. We can think of it as\ncompile-time vs runtime. With the term dynamic expressions, we refer to expressions that are evaluated at &#8220;runtime&#8221;,\ni.e., when a workflow is executed. This allows PLC operators to influence the workflow execution based on input\nprovided externally at execution time or values generated while running the workflow, e.g., the current value of a PLC\nnode.<\/p>\n<blockquote><p>\u2755 Tip: Verify whether a static workflow configuration is sufficient for your business needs or whether workflow\nexecution could vary depending on runtime parameters. The need for runtime arguments and dynamic expressions might\ninfluence the overall design of the engine, so it&#8217;s highly recommended to identify these needs as early as possible.<\/p><\/blockquote>\n<p>To support dynamic execution of workflows, we relied heavily on <a href=\"https:\/\/dynamic-linq.net\/\">Dynamic Linq<\/a> functionality,\nmore specifically for:<\/p>\n<ul>\n<li>parsing, evaluating, and validating dynamic expressions<\/li>\n<li>storing execution input values as well as generated values at runtime.<\/li>\n<\/ul>\n<h3>Dynamic expression example: if-condition<\/h3>\n<p>One example of dynamic expressions is control structures, which are an essential part of any programming language,\nincluding a custom DSL. 
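<\/p>\n<p>For illustration only, an if-condition step in a JSON-based DSL of this kind could look roughly like the following sketch. The property names below are hypothetical and are not the actual schema of our DSL:<\/p>\n<pre><code class=\"language-json\">{\r\n  \"name\": \"check-temperature\",\r\n  \"type\": \"if\",\r\n  \"condition\": \"data.Temperature &gt; data.TemperatureThreshold\",\r\n  \"then\": [ \"start-cooling\" ],\r\n  \"else\": [ \"log-temperature\" ]\r\n}<\/code><\/pre>\n<p>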
Control structures allow end-users (in our case PLC operators) to specify conditions, loops, and\nbranching statements, among others, that dictate how a workflow would execute. The exact type and number of control\nstructures for your custom DSL will depend on its intended purpose and the business needs.<\/p>\n<blockquote><p>\u2755 Tip: Consider which control structures make the most sense for your use-case, being both expressive and easy to use for\nyour end-users, and start with those.<\/p><\/blockquote>\n<p>In our use-case we identified if-conditions as a fundamental control structure to start our implementation from. We\nrelied on the Dynamic Linq <a href=\"https:\/\/dynamic-linq.net\/advanced-parse-lambda\">DynamicExpressionParser.ParseLambda<\/a> method, which creates a\n<a href=\"https:\/\/learn.microsoft.com\/dotnet\/api\/system.linq.expressions.lambdaexpression?view=net-7.0\">LambdaExpression<\/a> out of\na condition provided as a string. This LambdaExpression can then be compiled to create a <code>Delegate<\/code> that can be invoked\nusing the <a href=\"https:\/\/learn.microsoft.com\/dotnet\/api\/system.delegate.dynamicinvoke?view=net-7.0\">DynamicInvoke<\/a> method on\nit, as shown below.<\/p>\n<pre><code class=\"language-csharp\">var condition = \"1 == 1\";\r\nDelegate parsedDelegate = null;\r\ntry\r\n{\r\n    LambdaExpression parsedCondition = DynamicExpressionParser.ParseLambda(\r\n        new ParsingConfig(),\r\n        Array.Empty&lt;ParameterExpression&gt;(),\r\n        typeof(bool),\r\n        condition);\r\n    parsedDelegate = parsedCondition.Compile();\r\n}\r\ncatch (Exception e) when (e is ParseException or InvalidOperationException)\r\n{\r\n    \/\/ handle any exceptions thrown while parsing the condition\r\n}\r\n\r\n\/\/ guard against a failed parse before invoking the delegate\r\nif (parsedDelegate is not null &amp;&amp; (bool)parsedDelegate.DynamicInvoke()!)\r\n{\r\n    \/\/ execute then branch\r\n}\r\nelse\r\n{\r\n    \/\/ execute else branch\r\n}<\/code><\/pre>\n<h3>Data flow: workflow inputs and runtime data<\/h3>\n<p>The execution plan of a 
workflow could be influenced by input parameters at execution time or by values that were\ngenerated from previous steps of the workflow. Both of these were stored as properties in an instance of\n<code>System.Linq.Dynamic.Core.DynamicClass<\/code>.<\/p>\n<p>Assuming the workflow inputs are represented using the following class:<\/p>\n<pre><code class=\"language-csharp\">class WorkflowInputParameter\r\n{\r\n    public string Name;\r\n    public string Type;\r\n}<\/code><\/pre>\n<p>The following sample code shows how we populated the <code>workflowData<\/code> for the input parameters. A similar approach was used\nto pass outputs of previous steps to the next steps of the workflow.<\/p>\n<pre><code class=\"language-csharp\">\/\/ parameters as defined at compile-time\r\nvar workflowParameters = new List&lt;WorkflowInputParameter&gt;()\r\n{\r\n    new () { Name = \"temperature-threshold\", Type = \"System.Int32\" }\r\n};\r\n\r\n\/\/ convert to DynamicProperties\r\nList&lt;DynamicProperty&gt; dynamicProperties = workflowParameters.Select(p =&gt; new DynamicProperty(p.Name, Type.GetType(p.Type))).ToList();\r\n\r\n\/\/ create DynamicClass to hold values\r\nType workflowType = DynamicClassFactory.CreateType(dynamicProperties);\r\nDynamicClass workflowData = (DynamicClass)System.Activator.CreateInstance(workflowType)!; \/\/ consider handling nulls explicitly\r\n\r\n\/\/ actual values passed at runtime\r\nvar parameterValues = new Dictionary&lt;string, int&gt;()\r\n{\r\n    {\"temperature-threshold\", 14}\r\n};\r\nworkflowParameters.ForEach(\r\n    p =&gt; workflowData.SetDynamicPropertyValue(p.Name, Convert.ChangeType(parameterValues.GetValueOrDefault(p.Name), Type.GetType(p.Type))));<\/code><\/pre>\n<h3>Validation of dynamic expressions<\/h3>\n<p>Validation of dynamic expressions is crucial in order to provide early feedback to the user in case of errors or prevent\nexecution of unwanted code snippets, which could constitute a security issue.<\/p>\n<blockquote><p>\u2755 Tip: Consider 
validation if you are allowing dynamic expressions in your workflows to ensure no malicious code can\nbe executed and errors can be communicated to the end-user early.<\/p><\/blockquote>\n<p>In our case, syntactic validation (e.g. a step is attempting to use data from a previous step that does not exist)\nhappens via the Dynamic Linq <a href=\"https:\/\/dynamic-linq.net\/advanced-parse-lambda\">DynamicExpressionParser.ParseLambda<\/a> method, which\nthrows a <code>ParseException<\/code> as shown in the previous snippets. This can happen, among other cases, when a member does not exist\nor a parameter is not of the proper type.<\/p>\n<p>Regarding validation from a security point of view, Dynamic Linq already restricts the attack surface by allowing access\nonly to a pre-defined <a href=\"https:\/\/dynamic-linq.net\/expression-language#accessible-types\">set of types<\/a>: only primitive\ntypes and classes such as <code>System.Math<\/code> and <code>System.Convert<\/code> are accessible. This can be configured\/extended\nusing the <a href=\"https:\/\/dynamic-linq.net\/advanced-extending\">DynamicLinqType attribute<\/a> on a custom type.<\/p>\n<p>Further restrictions can be introduced by using the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Visitor_pattern\">Visitor pattern<\/a> and\nextending the\n<a href=\"https:\/\/learn.microsoft.com\/dotnet\/api\/system.linq.expressions.dynamicexpressionvisitor\">DynamicExpressionVisitor<\/a>. For\nexample, in order to check method calls, the following snippet can be helpful. 
The same pattern can be used to check\noperators, fields, properties, etc.<\/p>\n<pre><code class=\"language-csharp\">\/* code omitted for brevity *\/\r\n\r\n\/\/ expression as a string\r\nstring expression = \"Convert.ToInt32(data.ReadOpc1.Signals[\\\"signalB\\\"].Value) &gt; data.IntParam\";\r\nLambdaExpression parsedExpression = DynamicExpressionParser.ParseLambda(new[] { dataParam }, typeof(object), expression);\r\n\r\n\/\/ initiate expression visit\r\nWorkflowExpressionVisitor visitor = new ();\r\nvisitor.Visit(parsedExpression);\r\n\r\n\/\/ Visitor class\r\npublic class WorkflowExpressionVisitor : DynamicExpressionVisitor\r\n{\r\n    protected override Expression VisitMethodCall(MethodCallExpression node)\r\n    {\r\n        \/\/ perform any required checks, e.g. that the declaring type\r\n        \/\/ or the method called is whitelisted\r\n\r\n        return base.VisitMethodCall(node);\r\n    }\r\n}<\/code><\/pre>\n<h2>Workflow validation<\/h2>\n<p>Besides the validation of expressions that we just covered, validation of the whole workflow definition helps build a\nmore user-friendly engine by detecting possible errors or misconfigurations and providing feedback to end-users early.<\/p>\n<blockquote><p>\u2755 Tip: Consider validating as early as possible to give feedback to end-users. This feedback can potentially\ninclude all issues found, to allow resolving them all at once.<\/p><\/blockquote>\n<p>Some basic validation is already provided if you rely on concrete types instead of a generic <code>object<\/code> when you\ndeserialize the workflow definition. We also relied on attributes to specify required JSON properties and implemented\ncustom validation based on business rules (using FluentValidation in C#).<\/p>\n<p>What is more, instead of primitive types (like <code>int<\/code> or <code>string<\/code>), consider using custom types for fields that have\nbusiness value or hold domain semantics. 
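<\/p>\n<p>As a purely hypothetical illustration (the type below is not taken from our codebase), such a custom type can be a thin wrapper that encapsulates the primitive value together with its domain rules:<\/p>\n<pre><code class=\"language-csharp\">\/\/ Hypothetical domain type wrapping a primitive so that the\r\n\/\/ business rule lives in one place instead of being scattered.\r\npublic readonly struct TemperatureThreshold\r\n{\r\n    public int Value { get; }\r\n\r\n    public TemperatureThreshold(int value)\r\n    {\r\n        \/\/ encode the domain rule once, at the type boundary\r\n        if (value &lt; 0)\r\n        {\r\n            throw new ArgumentOutOfRangeException(nameof(value), \"Threshold must be non-negative.\");\r\n        }\r\n\r\n        Value = value;\r\n    }\r\n}<\/code><\/pre>\n<p>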
This way, if there is a need to refactor these fields in the future due to business changes,\nyou can better encapsulate the changes.<\/p>\n<h2>Workflow timeout and cancellation<\/h2>\n<p>In some cases, workflows could take a long time to complete. This could happen for various reasons,\nsuch as incorrect information in a workflow configuration or transient network issues on the factory floor.<\/p>\n<p>In our specific customer scenario, where the workflow engine runs on a factory edge, workflows need to finish\nand free up system resources as fast as possible. Having a workflow running for a long time without any response\nis undesirable and would in most cases indicate some kind of misconfiguration or network issue.\nTo handle this case, we were asked to give users the ability to specify a <em>timeout value<\/em> for the entire workflow.<\/p>\n<p>To implement this feature we followed the approach recommended in the\n<a href=\"https:\/\/learn.microsoft.com\/azure\/azure-functions\/durable\/durable-functions-error-handling?tabs=csharp-inproc#function-timeouts\">official documentation for Azure Durable Functions<\/a>.<\/p>\n<p>The implementation makes use of a <a href=\"https:\/\/github.com\/Azure\/durabletask\/wiki\/Feature---Durable-Timers\">Durable Timer<\/a>\nprovided by the DTFx framework. The workflow orchestrator (our custom class) waits for the timer and the scheduled\nactivities, and based on which task finishes first, we know whether the workflow timed out and should be cancelled. In case of\na timeout, we tell DTFx to purge the running orchestration, so that it can be cleaned up. However, purging an entire\norchestration in DTFx <em>doesn&#8217;t cancel already running activities<\/em>.<\/p>\n<p>Hence, we needed a way to supply a cancellation token down to each activity in the workflow. Since there can be\nmultiple orchestrations running in parallel, the cancellation token must be <em>unique for each orchestration<\/em>. 
To solve\nthis, we implemented a class that maps a DTFx orchestration context to a <code>CancellationTokenSource<\/code> and stores this map\nin memory:<\/p>\n<blockquote><p>Note: The approach explained below works only if the engine and all workflows run on a single node; it won&#8217;t work\nin a distributed scenario. Although the Durable Task Framework (DTFx) is designed for building distributed workflows\nand even supports the execution of activities on different machines, we don&#8217;t make use of this feature and\nensure the engine runs on just one node, to take out the complexity of a distributed environment.<\/p><\/blockquote>\n<pre><code class=\"language-csharp\">public class OrchestrationCancellationTokenRegistry : IOrchestrationCancellationTokenRegistry\r\n{\r\n    \/\/\/ &lt;summary&gt;\r\n    \/\/\/ Holds the map between DTFx orchestrations and cancellation tokens.\r\n    \/\/\/ &lt;\/summary&gt;\r\n    internal ConcurrentDictionary&lt;OrchestrationInstance, CancellationTokenSource&gt; CancellationTokenSources { get; private set; } = new ();\r\n\r\n    \/\/\/ &lt;summary&gt;\r\n    \/\/\/ Requests cancellation on the token for a workflow.\r\n    \/\/\/ &lt;\/summary&gt;\r\n    void IOrchestrationCancellationTokenRegistry.Cancel(OrchestrationInstance instance)\r\n    {\r\n        if (CancellationTokenSources.TryGetValue(instance, out CancellationTokenSource? 
cts))\r\n        {\r\n            cts?.Cancel();\r\n        }\r\n    }\r\n\r\n    \/\/\/ &lt;summary&gt;\r\n    \/\/\/ Creates a new token for a workflow execution.\r\n    \/\/\/ &lt;\/summary&gt;\r\n    void IOrchestrationCancellationTokenRegistry.Create(OrchestrationInstance instance)\r\n        =&gt; CancellationTokenSources.TryAdd(instance, new CancellationTokenSource());\r\n\r\n    \/\/\/ &lt;summary&gt;\r\n    \/\/\/ Gets a token for the workflow.\r\n    \/\/\/ &lt;\/summary&gt;\r\n    CancellationToken IOrchestrationCancellationTokenRegistry.Get(OrchestrationInstance instance)\r\n        =&gt; CancellationTokenSources[instance].Token;\r\n\r\n    \/\/\/ &lt;summary&gt;\r\n    \/\/\/ Removes a token for a workflow.\r\n    \/\/\/ &lt;\/summary&gt;\r\n    void IOrchestrationCancellationTokenRegistry.Remove(OrchestrationInstance instance)\r\n        =&gt; CancellationTokenSources.TryRemove(instance, out _);\r\n}<\/code><\/pre>\n<p>The following code snippet shows the <code>RunTask()<\/code> method of our custom workflow orchestrator,\nwhich inherits from the DTFx <code>TaskOrchestration<\/code> class:<\/p>\n<pre><code class=\"language-csharp\">private readonly IOrchestrationCancellationTokenRegistry _cancellationRegistry;\r\n\r\n  \/\/\/ &lt;summary&gt;\r\n  \/\/\/ Runs a workflow. 
Triggered by DTFx.\r\n  \/\/\/ &lt;\/summary&gt;\r\n  \/\/\/ &lt;param name=\"context\"&gt;Orchestrator context.&lt;\/param&gt;\r\n  \/\/\/ &lt;param name=\"workflowContext\"&gt;Workflow context.&lt;\/param&gt;\r\n  public override async Task&lt;DynamicClass&gt; RunTask(OrchestrationContext context, WorkflowContext workflowContext)\r\n  {\r\n      \/\/ Setup cancellation tokens\r\n      CancellationTokenSource timerCancellationSource = new ();\r\n      _cancellationRegistry.Create(context.OrchestrationInstance);\r\n\r\n      try\r\n      {\r\n          \/\/ Setup cancellation timer\r\n          Task&lt;bool&gt; timer = context.CreateTimer(\r\n              context.CurrentUtcDateTime.AddMilliseconds(workflowContext.WorkflowTimeout),\r\n              true,\r\n              timerCancellationSource.Token);\r\n\r\n          \/\/ Schedule all activities in the workflow\r\n          Task allActivities = ScheduleAllActivities(...);\r\n\r\n          \/\/ wait for any of them to complete\r\n          Task first = await Task.WhenAny(timer, allActivities);\r\n\r\n          if (first.Id == allActivities.Id)\r\n          {\r\n              \/* Activities finished first *\/\r\n\r\n              \/\/ cancel timer\r\n              timerCancellationSource.Cancel();\r\n          }\r\n          else\r\n          {\r\n              \/\/ cancel activities\r\n              workflowContext.WorkflowIsCancelled = true;\r\n              _cancellationRegistry.Cancel(context.OrchestrationInstance);\r\n          }\r\n      }\r\n      finally\r\n      {\r\n          \/\/ clean up cancellation\r\n          _cancellationRegistry.Remove(context.OrchestrationInstance);\r\n      }\r\n\r\n      return workflowContext.WorkflowData;\r\n  }<\/code><\/pre>\n<p>Each activity scheduled on an <code>OrchestrationContext<\/code> has access to the current <code>OrchestrationInstance<\/code> through the\n<code>DurableTask.Core.TaskContext<\/code> class. 
By injecting the <code>IOrchestrationCancellationTokenRegistry<\/code> it can get the\ncancellation token for the currently running orchestration, set it to throw an exception if cancellation was requested,\nand use the token in any other method calls it might make.<\/p>\n<pre><code class=\"language-csharp\">public class MyActivity : AsyncTaskActivity&lt;TInput, TOutput&gt;\r\n{\r\n    private readonly IOrchestrationCancellationTokenRegistry _cancellationRegistry;\r\n\r\n    protected override async Task&lt;TOutput&gt; ExecuteAsync(TaskContext context, TInput input)\r\n    {\r\n        \/\/ Get cancellation token from registry - this one will signal if the workflow times out.\r\n        CancellationToken workflowTimeOutToken = _cancellationRegistry.Get(context.OrchestrationInstance);\r\n        try\r\n        {\r\n            \/\/ throw if workflow times out.\r\n            workflowTimeOutToken.ThrowIfCancellationRequested();\r\n\r\n            return await ExecuteCodeActivityAsync(input, workflowTimeOutToken);\r\n        }\r\n        catch (OperationCanceledException)\r\n        {\r\n            \/\/ handle cancellation, then let the exception bubble up\r\n            \/\/ (also covers TaskCanceledException thrown by awaited calls)\r\n            throw;\r\n        }\r\n    }\r\n}<\/code><\/pre>\n<p>This approach ensures that in case a workflow times out, all activities will be cancelled, including the already running\nones.<\/p>\n<h2>Workflow closure step<\/h2>\n<p>Another specific feature we built on top of DTFx is the <em>workflow closure step<\/em>.<\/p>\n<p>In Durable Task Framework (DTFx), when an activity is scheduled using <code>ScheduleTask()<\/code>, the DTFx runtime creates a new\ntask for that activity and schedules it for execution. If the scheduled activity throws an unhandled exception, the DTFx\nruntime will catch the exception and escalate it to the orchestrator. 
If the exception is not caught and\nhandled in the orchestrator, the orchestrator will mark the entire orchestration as failed and stop executing subsequent\nactivities.<\/p>\n<p>In our scenario this would mean that a workflow could stop executing after any step and potentially leave the\nenvironment (PLCs) in an inconsistent state. Since this is a highly undesirable outcome, we needed to provide a way for\noperators to specify a special step in a workflow definition, which would <em>always<\/em> execute, regardless of successful\ncompletion of other activities. Developers can think of it as a try\/finally construct; in fact, it is\nimplemented using try\/finally syntax in C#.<\/p>\n<p>The <em>closure step<\/em> or <em>closure activity<\/em> is a normal workflow activity. The only difference from other activities is <em>when<\/em>\nit gets scheduled.<\/p>\n<p>The following code snippet illustrates our implementation of the closure step.<\/p>\n<pre><code class=\"language-csharp\">\/*\r\nSome parts of the code are omitted for brevity and to better highlight the implementation of the closure.\r\n*\/\r\n\r\nprivate static async Task&lt;DynamicClass&gt; RunTaskInternal(OrchestrationContext context, WorkflowContext workflowContext, RunnableWorkflowConfiguration runnableWorkflowConfiguration)\r\n{\r\n    try\r\n    {\r\n        foreach (var activity in runnableWorkflowConfiguration.Activities)\r\n        {\r\n            await context.ScheduleTask&lt;object&gt;(activity.ActivityType, [...]);\r\n        }\r\n    }\r\n    finally\r\n    {\r\n        if (runnableWorkflowConfiguration.Closure is { })\r\n        {\r\n            await context.ScheduleTask&lt;object&gt;(runnableWorkflowConfiguration.Closure.ActivityType, [...]);\r\n        }\r\n    }\r\n\r\n    return workflowContext.WorkflowData;\r\n}<\/code><\/pre>\n<p>The <code>RunTaskInternal()<\/code> method belongs to our custom <code>WorkflowOrchestrator<\/code> and is used to execute a workflow. 
The\n<code>runnableWorkflowConfiguration<\/code> object holds all data needed to execute a workflow, including all activities, input\nparameters, and the closure activity. First, in a <code>try { }<\/code> block we iterate through all activities in the workflow and\nschedule each of them using the DTFx <code>OrchestrationContext.ScheduleTask()<\/code> method. In the <code>finally { }<\/code> block we check\nif a closure activity was provided (it is optional) and if so, we schedule it to execute on the same instance\nof the <code>OrchestrationContext<\/code>.<\/p>\n<h2>Summary<\/h2>\n<p>In this post we have shown how we built a workflow engine on top of DTFx and tailored it to our needs. We discussed\nthe implementation of the Domain-specific Language (DSL) with workflow validation, dynamic expressions and data flow,\nworkflow timeout\/cancellation, and the closure step. We hope this information helps you understand how DTFx works and\nhow you can build more features on top of it.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this post we write about how we implemented a custom workflow engine that supports DSL, workflow cancellation and closure on top of Durable Task Framework (DTFx), explaining the why behind our choices and discussing the challenges we faced.<\/p>\n","protected":false},"author":119705,"featured_media":14752,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[3409,3397],"class_list":["post-14751","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cse","tag-durable-task-framework","tag-workflow-engines"],"acf":[],"blog_post_summary":"<p>In this post we write about how we implemented a custom workflow engine that supports DSL, workflow cancellation and closure on top of Durable Task Framework (DTFx), explaining the why behind our choices and discussing the 
challenges we faced.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/14751","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/users\/119705"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/comments?post=14751"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/14751\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media\/14752"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media?parent=14751"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/categories?post=14751"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/tags?post=14751"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}