Introduction
In our recent engagement, we developed a Large Language Model (LLM) service using Azure Prompt Flow. Our objective was not only to fulfill the service requirements but also to enhance the developer experience. This article outlines the seven strategies we incorporated to achieve this goal.
1. Maintain Dev Container
Dev Containers offer a solution to the “it works on my machine” problem by creating a consistent and replicable development environment across different machines. This containerization of the development environment facilitates easy sharing and replication within teams, reducing bugs and ensuring smoother deployments. Visual Studio Code enhances this experience with Dev Container support, allowing environments to be defined in a devcontainer.json file and run within a Docker container.
The devcontainer.json file can install the following tools to streamline your development:
- VS Code Extensions for enhanced productivity:
  - prompt-flow.prompt-flow: Integration with Prompt Flow workflows.
  - ms-python.python: Support for Python development.
  - GitHub.copilot: AI-powered coding assistance.
- Features to equip the container with necessary tools:
  - Azure CLI with ML extension: For interacting with Azure services.
  - Git: Essential version control, provided by the OS.
  - GitHub CLI: For GitHub operations directly from the terminal.
  - Node: For npm package installation.
  - Conda: A package and environment manager for any language.
- Shell Scripts for environment preparation and connection setup:
  - post-create.sh: Creates the conda environment and installs Python packages when the container is created (a minimal sketch follows the devcontainer.json example below).
  - post-start.sh: Activates the conda environment and establishes local connections for testing after the container starts.
Here is an example of a devcontainer.json file:
{
  "name": "Ubuntu",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "hostRequirements": {
    "cpus": 2
  },
  "mounts": [
    "source=${localWorkspaceFolder},target=/workspace,type=bind,consistency=cached"
  ],
  "containerEnv": {
    "CONDA_ENV_NAME": "conda_env"
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "prompt-flow.prompt-flow",
        "ms-python.python",
        "GitHub.copilot"
      ],
      "settings": {
        "python.terminal.activateEnvironment": true,
        "python.condaPath": "/opt/conda/bin/conda",
        "python.defaultInterpreterPath": "/opt/conda/envs/${containerEnv:CONDA_ENV_NAME}",
        "[python]": {
          "editor.formatOnSave": true,
          "editor.codeActionsOnSave": {
            "source.organizeImports": true
          }
        },
        "notebook.formatOnSave.enabled": true,
        "notebook.codeActionsOnSave": {
          "source.organizeImports": true
        }
      }
    }
  },
  "features": {
    "ghcr.io/devcontainers/features/azure-cli:1": {
      "version": "latest",
      "extensions": "ml"
    },
    "ghcr.io/devcontainers/features/git:1": {
      "version": "os-provided"
    },
    "ghcr.io/devcontainers/features/github-cli:1": {
      "version": "latest"
    },
    "ghcr.io/devcontainers/features/node:1": {
      "version": "latest"
    },
    "ghcr.io/devcontainers/features/conda:1": {
      "version": "latest"
    }
  },
  "onCreateCommand": "/bin/bash .devcontainer/post-create.sh",
  "postStartCommand": "/bin/bash .devcontainer/post-start.sh"
}
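The devcontainer.json above wires post-create.sh in via onCreateCommand. That script is not shown in full in this article; here is a minimal sketch of what it might contain, where the Python version and the requirements.txt location are assumptions:
#!/usr/bin/env bash
# post-create.sh: runs once when the container is created (onCreateCommand)
set -euo pipefail
# Create the conda environment named by the devcontainer's CONDA_ENV_NAME variable
conda create -y -n "$CONDA_ENV_NAME" python=3.9
# Install Python dependencies into the new environment
# (a requirements.txt at the repo root is an assumption)
conda run -n "$CONDA_ENV_NAME" pip install -r requirements.txt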
Learn more about how to set up Dev Containers by following the official documentation.
2. Leverage Azure Key Vault
Automating the retrieval of secrets such as API keys and base URLs into new development environments through Key Vault streamlines the setup process. This approach eliminates the repetitive manual task of entering sensitive data into a .env file.
The common.sh script showcases an effective application of this approach, providing reusable functions for Key Vault secret extraction and environment variable setup. These capabilities enable smooth integration of Prompt Flow with services such as OpenAI or Azure AI Search, enhancing both the efficiency and the security of development environment configuration.
Here’s a closer look at how the common.sh script can be utilized to make development environment setup more efficient:
# Obtains a secret from keyvault using the provided secret name
get_secret() {
secret_name="$1"
echo "Retrieving $secret_name from key vault..." >&2
secret_value=$(az keyvault secret show --subscription "$AZURE_SUBSCRIPTION_ID" -n "$secret_name" --vault-name "$KEYVAULT_NAME" --query value -o tsv)
if [ $? -ne 0 ]; then
echo "Failed to retrieve $secret_name from keyvault" >&2
exit 1
fi
echo "$secret_value"
}
# Populates the configuration entries required to create PromptFlow connections
setup_pf_connection_variables() {
# Use the dev keyvault unless otherwise specified
AZURE_SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID:-"<subscription_id>"}"
KEYVAULT_NAME="${KEYVAULT_NAME:-"<keyvault_name>"}"
# Retrieve from key vault via get_secret() function, if not exported locally
AZURE_SEARCH_KEY="${AZURE_SEARCH_KEY:-$(get_secret "azure-search-key")}"
AZURE_SEARCH_ENDPOINT="${AZURE_SEARCH_ENDPOINT:-$(get_secret "azure-search-endpoint")}"
AZURE_OPENAI_API_KEY="${AZURE_OPENAI_API_KEY:-$(get_secret "openai-api-key")}"
AZURE_OPENAI_API_BASE="${AZURE_OPENAI_API_BASE:-$(get_secret "openai-endpoint")}"
}
You can access the full common.sh script below.
#!/bin/bash
# common.sh: common functions and helpers for other scripts
set -euE
function error_trap {
echo
echo "*** script failed to execute to completion ***"
echo "An error occurred on line $1 of the script."
echo "Exit status of the last command was $2"
}
trap 'error_trap $LINENO $?' ERR
# Load environment variables from specified file (or .env), if exists
load_env_file() {
env_file="${1:-.env}"
if [ -f "$env_file" ]; then
export $(cat "$env_file" | sed 's/#.*//g' | xargs)
fi
}
# Obtains a secret from keyvault using the provided secret name
get_secret() {
secret_name="$1"
echo "Retrieving $secret_name from key vault..." >&2
secret_value=$(az keyvault secret show --subscription "$AZURE_SUBSCRIPTION_ID" -n "$secret_name" --vault-name "$KEYVAULT_NAME" --query value -o tsv)
if [ $? -ne 0 ]; then
echo "Failed to retrieve $secret_name from keyvault" >&2
exit 1
fi
echo "$secret_value"
}
# Populates the configuration entries required to create PromptFlow connections
setup_pf_connection_variables() {
# Use the dev keyvault unless otherwise specified
AZURE_SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID:-"<subscription_id>"}"
KEYVAULT_NAME="${KEYVAULT_NAME:-"<keyvault_name>"}"
# Retrieve from key vault via get_secret() function, if not exported locally
AZURE_SEARCH_KEY="${AZURE_SEARCH_KEY:-$(get_secret "azure-search-key")}"
AZURE_SEARCH_ENDPOINT="${AZURE_SEARCH_ENDPOINT:-$(get_secret "azure-search-endpoint")}"
AZURE_OPENAI_API_KEY="${AZURE_OPENAI_API_KEY:-$(get_secret "openai-api-key")}"
AZURE_OPENAI_API_BASE="${AZURE_OPENAI_API_BASE:-$(get_secret "openai-endpoint")}"
}
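For these helpers to work, the key vault must already contain secrets under the names the script expects. They can be seeded once with the Azure CLI; in this sketch the vault name and secret values are placeholders:
# One-time setup: store the secrets that the setup scripts will read
az keyvault secret set --vault-name "<keyvault_name>" \
--name "openai-api-key" --value "<your-api-key>"
az keyvault secret set --vault-name "<keyvault_name>" \
--name "openai-endpoint" --value "https://<resource>.openai.azure.com/"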
Discover more Azure CLI Key Vault commands by following the official documentation.
3. Automate Local Developer Setup
When working with Prompt Flow locally, establishing connections to services such as OpenAI or Azure AI Search is necessary. This can be done manually via the Prompt Flow VS Code extension, through CLI commands, or more efficiently with shell scripts.
Here is the local-create-connection.sh script, which shows how you can rapidly create local connections to OpenAI and Azure AI Search.
#!/bin/bash
set -euE
# Set the SCRIPT_PATH variable to the directory where the current script is located.
# "$0" is a reference to the current script, "readlink -f" resolves its absolute file path,
# and "dirname" extracts the directory part of this path.
SCRIPT_PATH="$(dirname "$(readlink -f "$0")")"
# Source the common.sh script located in the same directory as this script.
# Sourcing a script allows us to use its functions and variables in the current script.
. "$SCRIPT_PATH/common.sh"
# Load any declared variables from .env file at repo root
load_env_file "$SCRIPT_PATH/../.env"
# Setup the variables used for PromptFlow connections below
# Call to setup_pf_connection_variables function in common.sh
setup_pf_connection_variables
# create connection to openai
pf connection create -f "${SCRIPT_PATH}/../src/connections/azure_openai.template.yaml" \
--set api_key="${AZURE_OPENAI_API_KEY}" \
--set api_base="${AZURE_OPENAI_API_BASE}"
# create connection to azure search
pf connection create -f "${SCRIPT_PATH}/../src/connections/azure_ai_search.template.yaml" \
--set api_key="${AZURE_SEARCH_KEY}" \
--set api_base="${AZURE_SEARCH_ENDPOINT}"
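The script references connection template files under src/connections/ that are not shown here. As a sketch of what azure_openai.template.yaml might contain (the connection name and api_version below are assumptions; the placeholder values are overridden by the --set flags above):
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: azure_openai_connection
type: azure_open_ai
api_key: "<api-key>"
api_base: "<api-base>"
api_type: azure
api_version: "2023-07-01-preview"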
Learn more about managing local connections from the official Prompt Flow documentation.
4. Automate Cloud Infrastructure Setup
Below, the azure-create-connection.sh script illustrates how to rapidly set up connections to OpenAI and Azure AI Search on a cloud instance of Prompt Flow.
Dive deep into azure-create-connection.sh.
#!/bin/bash
set -euE
# Set the SCRIPT_PATH variable to the directory where the current script is located.
# "$0" is a reference to the current script, "readlink -f" resolves its absolute file path,
# and "dirname" extracts the directory part of this path.
SCRIPT_PATH="$(dirname "$(readlink -f "$0")")"
# Source the common.sh script located in the same directory as this script.
# Sourcing a script allows us to use its functions and variables in the current script.
. "$SCRIPT_PATH/common.sh"
# Load any declared variables from .env file at repo root
load_env_file "$SCRIPT_PATH/../.env"
# Setup the variables used for PromptFlow connections below
# Call to setup_pf_connection_variables function in common.sh
setup_pf_connection_variables
# Get Azure access token for POST requests
ACCESS_TOKEN="$(az account get-access-token --query accessToken -o tsv)"
url_base_create_connection="https://ml.azure.com/api/${LOCATION}/flow/api/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_RESOURCE_GROUP_NAME}/providers/Microsoft.MachineLearningServices/workspaces/${WORKSPACE_NAME}/Connections"
# Create Azure OpenAI connection
url_create_azure_open_ai_connection="${url_base_create_connection}/${AZURE_OPENAI_CONNECTION_NAME}?asyncCall=true"
echo "Creating Azure OpenAI connection with name: $AZURE_OPENAI_CONNECTION_NAME"
curl -s --request POST --fail \
--url "$url_create_azure_open_ai_connection" \
--header "Authorization: Bearer $ACCESS_TOKEN" \
--header 'Content-Type: application/json' \
-d @- <<EOF
{
"connectionType": "AzureOpenAI",
"configs": {
"api_key": "$AZURE_OPENAI_API_KEY",
"api_base": "$AZURE_OPENAI_API_BASE",
"api_type": "azure",
"api_version": "$API_VERSION",
"resource_id": "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_RESOURCE_GROUP_NAME}/providers/Microsoft.CognitiveServices/accounts/${AZURE_OPEN_AI_RESOURCE_NAME}"
}
}
EOF
echo -e "\n"
# Create Azure AI Search connection
url_create_azure_ai_search_connection="${url_base_create_connection}/${AZURE_AI_SEARCH_CONNECTION_NAME}?asyncCall=true"
echo "Creating Azure AI Search connection with name: $AZURE_AI_SEARCH_CONNECTION_NAME"
curl -s --request POST --fail \
--url "$url_create_azure_ai_search_connection" \
--header "Authorization: Bearer $ACCESS_TOKEN" \
--header 'Content-Type: application/json' \
-d @- <<EOF
{
"connectionType": "CognitiveSearch",
"configs": {
"api_key": "$AZURE_SEARCH_KEY",
"api_base": "$AZURE_SEARCH_ENDPOINT",
"api_version": "2023-07-01-Preview"
}
}
EOF
echo -e "\n"
echo "Done!"
Note: Keep in mind that future updates may introduce new CLI commands, eliminating the need for direct curl requests. For the latest pfazure commands and how to interact with Prompt Flow via CLI, consult the official documentation.
5. Combine Automation Scripts and Dev Container
After developing the scripts from steps 2 and 3, incorporate them into your Dev Container to streamline development environment setup. This procedure kicks off with the creation of a conda environment and an Azure login, followed by the execution of the local-create-connection.sh script from step 3, which creates connections to OpenAI and Azure AI Search by retrieving API keys and base URLs from the key vault, using the Azure session for authentication. This setup paves the way for local testing within your Prompt Flow VS Code extension, promoting an efficient and streamlined development process.
Below is the post-start.sh script that encapsulates this workflow, ensuring your Dev Container is primed for productive development right from the start:
#!/usr/bin/env bash
set -eux
# Check if already initialized
INITIALIZED_MARKER=~/.devcontainer-initialized
[ -f "$INITIALIZED_MARKER" ] && exit 0
# Don't activate 'base' environment
# VS Code activates its selected conda environment automatically
conda config --set auto_activate_base false
# Setup PromptFlow connections
az login
conda run -n "$CONDA_ENV_NAME" --live-stream ./local-create-connection.sh
# Mark this container as initialized
touch "$INITIALIZED_MARKER"
6. Automate E2E Flow Run Locally
Executing local flow runs manually, particularly a sequence of one standard flow followed by multiple evaluation flows, can become cumbersome and inefficient. The complexity increases when each evaluation flow depends on the output of the preceding flow. The script presented here automates the process: the output from standard_flow_01 is passed as input to evaluation_flow_01, and similarly, the output of the first evaluation flow serves as input for evaluation_flow_02. This automation significantly speeds up the execution of these interconnected flows.
The local-run-with-evaluations.sh script below outlines the entire automated process, from setting up the environment and loading variables, to executing the standard and evaluation flows in sequence, and finally logging the run names for reference. This method not only saves time but also reduces manual errors, ensuring a smooth and efficient end-to-end flow execution.
#!/bin/bash
set -euE
# Set the SCRIPT_PATH variable to the directory where the current script is located.
# "$0" is a reference to the current script, "readlink -f" resolves its absolute file path,
# and "dirname" extracts the directory part of this path.
SCRIPT_PATH="$(dirname "$(readlink -f "$0")")"
# Source the common.sh script located in the same directory as this script.
# Sourcing a script allows us to use its functions and variables in the current script.
. "$SCRIPT_PATH/common.sh"
# Load any declared variables from .env file at repo root
load_env_file "$SCRIPT_PATH/../.env"
# Default values
experiment_name="local-run"
story_id="STORY-ID-NNNNN"
# Flow locations
standard_flow_01="${SCRIPT_PATH}/../src/flows/standard_flow_01"
evaluation_flow_01="${SCRIPT_PATH}/../src/flows/evaluation_flow_01"
evaluation_flow_02="${SCRIPT_PATH}/../src/flows/evaluation_flow_02"
# Path to the dataset, relative to the flow's run.yml
data_path="../../../../data/tests/app-reference-dev-prompt.jsonl"
# If experiment_name is not provided as an argument, use the current git branch
if [ -z "$experiment_name" ]; then
experiment_name=$(git rev-parse --abbrev-ref HEAD)
fi
# Run names
current_time="$(date +%Y%m%d%H%M%S)"
run_name_prefix="${story_id}_${experiment_name}_${current_time}"
standard_flow_run_name="${run_name_prefix}-standard_flow"
evaluation_flow_1_run_name="${run_name_prefix}-evaluation_flow_01"
evaluation_flow_2_run_name="${run_name_prefix}-evaluation_flow_02"
# Step 1: Create a run for Standard Flow
echo "Step 1: Standard Flow - $standard_flow_run_name"
pf run create --stream -n "$standard_flow_run_name" \
-f "${standard_flow_01}/run.yml" \
--data "$data_path"
# Step 2: Create a run for Evaluation Flow 1,
# referencing the standard flow run from step 1 via --run
echo "Step 2: Evaluation Flow 1 - $evaluation_flow_1_run_name"
pf run create --stream -n "$evaluation_flow_1_run_name" \
-f "${evaluation_flow_01}/run.yml" \
--data "$data_path" \
--run "$standard_flow_run_name"
# Step 3: Create a run for Evaluation Flow 2,
# referencing the evaluation flow run from step 2 via --run
echo "Step 3: Evaluation Flow 2 - $evaluation_flow_2_run_name"
pf run create --stream -n "$evaluation_flow_2_run_name" \
-f "${evaluation_flow_02}/run.yml" \
--data "$data_path" \
--run "$evaluation_flow_1_run_name"
# Write run names to current experiment file
tee "${SCRIPT_PATH}/../src/.current_experiment.json" <<EOF
{
"name": "${experiment_name}",
"flows": {
"standard_flow_01": "${standard_flow_run_name}",
"evaluation_flow_01": "${evaluation_flow_1_run_name}",
"evaluation_flow_02": "${evaluation_flow_2_run_name}"
}
}
EOF
echo "Done"
Learn more about running flows locally by following the official Prompt Flow documentation.
7. Automate E2E Flow Run on Azure
This step extends the automation to Azure, allowing you to sequence flows similarly to step 6, but within the Azure environment. The script below mirrors the local automation, adapting it for Azure’s infrastructure, ensuring a seamless transition of your workflows to the cloud.
Dive deep into azure-run-flows-with-evaluations.sh.
#!/bin/bash
set -euE
# Set the SCRIPT_PATH variable to the directory where the current script is located.
# "$0" is a reference to the current script, "readlink -f" resolves its absolute file path,
# and "dirname" extracts the directory part of this path.
SCRIPT_PATH="$(dirname "$(readlink -f "$0")")"
# Source the common.sh script located in the same directory as this script.
# Sourcing a script allows us to use its functions and variables in the current script.
. "$SCRIPT_PATH/common.sh"
# Load any declared variables from .env file at repo root
load_env_file "$SCRIPT_PATH/../.env"
# Default values
experiment_name="azure-run"
story_id="STORY-ID-NNNNN"
# Flow locations
standard_flow_01="${SCRIPT_PATH}/../src/flows/standard_flow_01"
evaluation_flow_01="${SCRIPT_PATH}/../src/flows/evaluation_flow_01"
evaluation_flow_02="${SCRIPT_PATH}/../src/flows/evaluation_flow_02"
# Path to the dataset, relative to the flow's run.yml
data_path="../../../../data/tests/app-reference-dev-prompt.jsonl"
# If experiment_name is not provided as an argument, use the current git branch
if [ -z "$experiment_name" ]; then
experiment_name=$(git rev-parse --abbrev-ref HEAD)
fi
# Run names
current_time="$(date +%Y%m%d%H%M%S)"
run_name_prefix="${story_id}_${experiment_name}_${current_time}"
standard_flow_run_name="${run_name_prefix}-standard_flow"
evaluation_flow_1_run_name="${run_name_prefix}-evaluation_flow_01"
evaluation_flow_2_run_name="${run_name_prefix}-evaluation_flow_02"
# Step 1: Create a run for Standard Flow
echo "Step 1: Standard Flow - $standard_flow_run_name"
pfazure run create --stream -n "$standard_flow_run_name" \
--subscription "$AZURE_SUBSCRIPTION_ID" \
-g "$AZURE_RESOURCE_GROUP_NAME" \
-w "$WORKSPACE_NAME" \
-f "${standard_flow_01}/run.yml" \
--data "$data_path"
# Step 2: Create a run for Evaluation Flow 01,
# referencing the standard flow run from step 1 via --run
echo "Step 2: Evaluation Flow 01 - $evaluation_flow_1_run_name"
pfazure run create --stream -n "$evaluation_flow_1_run_name" \
--subscription "$AZURE_SUBSCRIPTION_ID" \
-g "$AZURE_RESOURCE_GROUP_NAME" \
-w "$WORKSPACE_NAME" \
-f "${evaluation_flow_01}/run.yml" \
--data "$data_path" \
--run "$standard_flow_run_name"
# Step 3: Create a run for Evaluation Flow 02,
# referencing the evaluation flow run from step 2 via --run
echo "Step 3: Evaluation Flow 02 - $evaluation_flow_2_run_name"
pfazure run create --stream -n "$evaluation_flow_2_run_name" \
--subscription "$AZURE_SUBSCRIPTION_ID" \
-g "$AZURE_RESOURCE_GROUP_NAME" \
-w "$WORKSPACE_NAME" \
-f "${evaluation_flow_02}/run.yml" \
--data "$data_path" \
--run "$evaluation_flow_1_run_name"
# Write run names to current experiment file
tee "${SCRIPT_PATH}/../src/.current_experiment.json" <<EOF
{
"name": "${experiment_name}",
"flows": {
"standard_flow_01": "${standard_flow_run_name}",
"evaluation_flow_01": "${evaluation_flow_1_run_name}",
"evaluation_flow_02": "${evaluation_flow_2_run_name}"
}
}
EOF
echo "Done"
Learn more about running flows on Azure by following the official Prompt Flow documentation.
Conclusion
This blog post showcases key strategies for efficient Prompt Flow development: leveraging Dev Containers for consistent environments, utilizing Azure Key Vault for secure secret management, and incorporating automation scripts to streamline setup and execution. Together, these reusable patterns form a comprehensive blueprint, empowering developers to refine and elevate their team’s workflow.