April 23rd, 2025

Azure SDK modularized libraries for JavaScript

Qiaoqiao Zhang
Senior Software Engineer

Previously, we introduced Azure REST libraries for JavaScript, which are optimized for browser user experience and bundle size. They offer a more cost-effective and efficient path to production and close alignment with the real API surface. These libraries are ideal for customers who prefer a thin abstraction and have stringent bundle size requirements. However, for customers accustomed to the traditional client experience, REST-level clients (RLCs) have their limitations: they aren't as user-friendly. At the same time, bundle size remains a concern for many browser-based application customers, because our traditional clients aren't designed for easy tree-shaking.

Modularized design

The limitations of our traditional clients and RLCs drove us to explore new solutions. We're developing new Azure modularized libraries that are built on top of the RLC and modularize each API call. They offer great usability, combining consistency and flexibility at a reduced bundle size, while minimizing disruption for traditional client customers.

Subpath exports

To achieve our modularized libraries design goal, we apply subpath exports, available since Node.js version 12.7, to offer tailored experiences for various customer scenarios.

This approach leads to our modularized libraries’ multi-layer design:

  • Modular API Layer: Modularizes each API call, allowing customers to import only what they need. It also handles serialization for requests and deserialization for responses to and from the REST layer.
  • Service Client Layer: Provides the same user experience as traditional clients. It includes a convenience layer, the ServiceClient, which is built on top of the underlying API layer.
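Conceptually, these layers map onto a package.json "exports" field along the following lines. This is a simplified sketch with assumed paths; shipped packages also carry conditional ESM/CJS entries, and the "./experimental" subpath illustrates the preview scenario described below:

```json
{
  "name": "@azure/openai",
  "type": "module",
  "exports": {
    ".": "./dist/index.js",
    "./api": "./dist/api/index.js",
    "./models": "./dist/models/index.js",
    "./experimental": "./dist/experimental/index.js"
  }
}
```

With a map like this, `import { OpenAIClient } from "@azure/openai"` resolves the service client layer, while `import { getChatCompletions } from "@azure/openai/api"` resolves only the modular API layer.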

We can also utilize subpath exports to support other modularized scenarios.

Sub clients

One example is sub clients: when a package has multiple parallel or hierarchical sub clients, each sub client's subpath export contains only the APIs related to that sub client. Users can select and integrate only the components relevant to their specific sub client scenarios, optimizing resource utilization and enhancing customization.

Experimental features

Another benefit is that we can preview experimental service features, and gather customer feedback earlier, by placing these features inside a dedicated experimental subpath export. Customers can try out new functionality while keeping a clear picture of which APIs are stable.

Opt-in helpers

In our new modularized libraries, an important design choice is to delegate as much as possible from the core libraries to the client side, providing opt-in helpers to reduce unnecessary runtime burden. For example, our traditional clients depend heavily on core packages for centralized serialization and deserialization, which is the main reason we can't easily do tree-shaking. In our new modularized libraries, we move model serialization and deserialization from the core libraries to the client side as opt-in helpers. Opt-in helpers are also provided for paging and long-running operations to improve the user experience. In this way, customers don't pay the overhead cost of centrally handling unused features.

ECMAScript Modules vs. CommonJS

Traditional clients are bundled only as CommonJS modules. We now use tshy to bundle our modularized libraries into both ECMAScript Modules (ESM) and CommonJS (CJS) formats, so customers can work seamlessly with either module system in their experience with Azure SDK modularized libraries for JavaScript.

Bundle size optimization

In our modularized libraries, different layers have different bundle size improvements. The Network Analytics management plane library @azure/arm-networkanalytics is used to demonstrate the improvement.

A dataProduct resource is created with the traditional client, with the service client layer and modular API layer of the modularized library, and with the RLC. Then vite is used to bundle each of them separately.

The following table shows the bundle size and the percentage improvement, compared to the traditional client, for each client type:

Client Type            Bundle Size    Optimized Percentage
Traditional Client     124.64 KB      N/A
Service Client Layer   91.39 KB       26.68%
Modular Layer          67.97 KB       45.47%
RLC                    48.23 KB       61.30%

From this table, we can conclude that:

  • Compared with the traditional client, the modular layer provides a substantial bundle size improvement, and even the service client layer improves significantly.
  • Compared with the RLC, the modular layer provides a better user experience at the cost of a reasonable bundle size increase.

User experience in modularized libraries

The modularized library for OpenAI is used as an example to show the user experience at each layer. Each layer's example calls the chatCompletions API and shares the following setup.

  • We need to provide an endpoint and azureApiKey for this scenario:
    const endpoint = process.env['ENDPOINT'] || '';
    const azureApiKey = process.env['AZURE_API_KEY'] || '';
  • We can use AzureKeyCredential from @azure/core-auth for authentication:
    import { AzureKeyCredential } from "@azure/core-auth";
    const credential = new AzureKeyCredential(azureApiKey);
  • Choose the model from the available model list. In this case, we choose gpt-35-turbo-1106:
    const deploymentName = 'gpt-35-turbo-1106';
  • The conversation message and available function we want to send:
    const messages = [{ role: 'user', content: 'What is the weather like in Boston?' }];
    const tools = [
      {
        type: 'function',
        function: {
          name: 'get_current_weather',
          description: 'Get the current weather in a given location',
          parameters: {
            type: 'object',
            properties: {
              location: {
                type: 'string',
                description: 'The city and state, e.g. San Francisco, CA',
              },
              unit: {
                type: 'string',
                enum: ['celsius', 'fahrenheit'],
              },
            },
            required: ['location'],
          },
        },
      },
    ];

RLC

As shown in the example, the RLC user experience is oriented towards REST API calls. For instance, we need to specify the URL path and the HTTP method, and be explicit about what goes into the request body.

import createOpenAIContext, { isUnexpected } from '@azure-rest/openai';
import { createRestError } from '@azure-rest/core-client';

const client = createOpenAIContext(endpoint, credential);

const result = await client
  .path('/deployments/{deploymentId}/chat/completions', deploymentName)
  .post({
     body: {
      messages,
      tools,
     },
  });

We can also use the opt-in isUnexpected helper to handle the unexpected result:

if (isUnexpected(result)) {
  throw createRestError(result);
}

Modular API layer

As shown in the example, we provide modularized API calls getChatCompletions from the ./api subpath, which hides the REST API details from customers. Required parameters are projected as positional parameters, whereas optional parameters are grouped into an options bag. This approach has an extra client context parameter that encapsulates the common state shared across operations.

import { getChatCompletions, createOpenAIContext } from "@azure/openai/api";

const context = createOpenAIContext(endpoint, credential);
const result = await getChatCompletions(context, deploymentName, messages, { tools });

Service client layer

As shown in the example, the client layer user experience is similar to the traditional client. All methods belong to a client class that organizes operations.

import { OpenAIClient } from "@azure/openai";

const client = new OpenAIClient(endpoint, credential);
const result = await client.getChatCompletions(deploymentName, messages, { tools });

Features for modularized libraries

In addition to the bundle size and user experience in different layers, there’s support for new features like complex hierarchies and models-based serialization and deserialization. Pagination and long-running operations features have also been redesigned for improved developer experience.

Complex hierarchies

In the traditional client, we only support a single client per package and export all models at the top level. In modularized libraries, however, complex client/model hierarchies are better supported through subpath exports. The idea is to wrap an independent subpath for each sub client or model namespace, and then organize the subpath exports so that they're consistent with the service's conceptual business logic.

Parallel sub clients

In this case, those multiple sub clients are relatively independent and equally important. If customers only need to focus on one sub client, they could just import everything from that sub client.

// For customers who focus on LoadTestRun resource-related APIs.
import { LoadTestRunClient } from "@azure/loadtesting/loadTestRun";
const loadTestRunClient = new LoadTestRunClient();
loadTestRunClient.getTestRun();

// For customers who focus on TestProfileAdministration resource-related APIs.
import { TestProfileAdministrationClient } from "@azure/loadtesting/testProfileAdministration";
const profileAdminClient = new TestProfileAdministrationClient();
profileAdminClient.getTestProfile();

Hierarchical sub clients

In this case, there are clear hierarchies among the sub clients, which better reflect the service's conceptual models and patterns. This approach is useful for services with clear boundaries among different kinds of customer scenarios.

// For customers who focus on Storage Container resources: create a StorageClient first, then get a StorageContainerClient from it.
import { StorageClient } from "@azure/storage";
const storageClient = new StorageClient(accountName);
const storageContainerClient = storageClient.getContainerClient(containerId);
storageContainerClient.upload();

// Or import from the ./container subpath and provide the StorageClient parameters when creating the StorageContainerClient.
import { StorageContainerClient } from "@azure/storage/container";
const storageContainerClient = new StorageContainerClient(accountName, containerId);
storageContainerClient.upload();

Model namespaces hierarchies

In some model-intensive applications, there are clear boundaries among models, depending on the customer's scenarios. In modularized libraries, we support organizing those models into subpath hierarchies that align with the namespace hierarchy definition, so that different customers can focus on just the models they need. For example, given the following namespace definition:

namespace Chat {
  model ChatRequestMessage { ... }
  model ChatResponseMessage { ... }  
  namespace Completion {
    model CompletionRequest { ... }
    model CompletionResponse { ... }     
  }
  namespace Embedding {
    model EmbeddingRequest { ... }
    model EmbeddingResponse { ... }       
  }
}

We allow customers to import only the models in a specific namespace when their application focuses solely on that namespace.

import { ChatRequestMessage, ChatResponseMessage } from "@azure/openai/models/chat";

import { CompletionRequest, CompletionResponse } from "@azure/openai/models/chat/completion";

import { EmbeddingRequest, EmbeddingResponse } from "@azure/openai/models/chat/embedding";

Models-based serialization and deserialization

As mentioned in the opt-in helpers section, serialization and deserialization used to be implemented in the core library. Consequently, the core must account for the most complex cases, and this logic can't be tree-shaken even in simple scenarios. In our new modularized libraries, we decentralize the serialization and deserialization logic into the client library, per model, to support tree-shaking.

For example, assume there are two main customer scenarios. One scenario is related to the Window resource type, which is a simple model with several properties. The other scenario is related to the Extension resource type, which is a complex model with recursive references.

interface Window {
  width: string;
  length: number;
}

interface Extension extends Element {
  level: number;
}

interface Element {
  extension?: Extension[];
}

For the preceding Window and Extension models, we generate the following serializers.

function windowSerializer(obj: Window): any {
  return {
    width: obj.width,
    length: obj.length,
  };
}

export function extensionSerializer(item: Extension): any {
  return {
    extension: !item["extension"]
      ? item["extension"]
      : extensionArraySerializer(item["extension"]),
    level: item["level"],
  };
}

export function extensionArraySerializer(result: Array<Extension>): any[] {
  return result.map((item) => {
    return extensionSerializer(item);
  });
}

export function elementSerializer(item: Element): any {
  return {
    extension: !item["extension"]
      ? item["extension"]
      : extensionArraySerializer(item["extension"]),
  };
}

If a group of customers only needs to deal with the Window model, they don’t have to keep the complex Extension model serialization logic in their bundle. This design could greatly help to optimize their application.
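The generated serializers above have response-side counterparts. The following is a minimal sketch of what a per-model deserializer for the Window model might look like; the `windowDeserializer` name is an assumption, and the interface is renamed `WindowModel` here only to avoid clashing with the DOM `Window` type:

```typescript
// Renamed WindowModel here to avoid clashing with the DOM Window type.
interface WindowModel {
  width: string;
  length: number;
}

// Hypothetical per-model deserializer, mirroring windowSerializer:
// it maps the raw wire shape back onto the typed model.
function windowDeserializer(raw: any): WindowModel {
  return {
    width: raw["width"],
    length: raw["length"],
  };
}

const model = windowDeserializer({ width: "120", length: 80 });
console.log(model.length); // 80
```

Because the deserializer is a plain standalone function per model, a bundler can drop it whenever no response of that type is ever read.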

Pagination

The pagination experience is similar to the traditional client, with a few improvements to better align with the general Azure SDK pagination guidelines. If customers don't want to deal with pagination details and just want to get the data and process it, here's the recommended usage:

const client = new ServiceClient();
for await (const item of client.listItems()) {
    handleItem(item);
}

For more granular control of the pagination, use the byPage method and provide the continuationToken received from a previous page. For example:

// usage of continuationToken with byPage
const previousPage = await client.listItems().byPage().next();
const continuationToken = previousPage.value.continuationToken;
for await (const page of client.listItems().byPage({ continuationToken })) {
    handlePage(page);
}
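The dual item/page iteration above can be sketched with async generators. This is not the actual implementation; it's a minimal model assuming a hypothetical `getPage(token)` fetcher and a simple `{ items, continuationToken }` page shape:

```typescript
interface Page<T> {
  items: T[];
  continuationToken?: string;
}

// Build an async iterator over items that also exposes a byPage()
// escape hatch, in the spirit of PagedAsyncIterableIterator.
function createPagedIterator<T>(
  getPage: (token?: string) => Promise<Page<T>>
) {
  async function* byPage(settings?: {
    continuationToken?: string;
  }): AsyncGenerator<Page<T>> {
    let token = settings?.continuationToken;
    let page: Page<T>;
    do {
      page = await getPage(token); // fetch one page per round trip
      token = page.continuationToken;
      yield page;
    } while (token);
  }
  async function* items(): AsyncGenerator<T> {
    for await (const page of byPage()) {
      yield* page.items; // flatten pages into individual items
    }
  }
  return Object.assign(items(), { byPage });
}

// Demo with two in-memory pages.
const pages: Record<string, Page<number>> = {
  start: { items: [1, 2], continuationToken: "p2" },
  p2: { items: [3] },
};

async function demo() {
  const collected: number[] = [];
  const iterator = createPagedIterator((t) =>
    Promise.resolve(pages[t ?? "start"])
  );
  for await (const n of iterator) {
    collected.push(n);
  }
  console.log(collected.join(",")); // 1,2,3
}
demo();
```

The design lets the simple loop stay simple while byPage carries the continuation-token plumbing for customers who need it.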

Long-running operation

There are also changes in our new long-running operation design and implementation.

User experience

Previously, in our traditional clients, we generated two operations (beginDoSth and beginDoSthAndWait) for each long-running operation, which was both redundant and confusing to our customers. The beginDoSth operation sends the initial request and returns a poller, letting the customer add their own control logic for how to poll until the operation finishes. The beginDoSthAndWait operation does the polling in the background until the operation finishes and returns the final result.

// wait for the final result without caring about the polling
const result = await beginDoSthAndWait();

// to have more control over the polling
const poller = await beginDoSth();
const result = await poller.pollUntilDone();

In the new design, there's just one operation (doSth) for each long-running operation. If customers hold on to the returned poller, they can use it to initiate the request and control the polling themselves. If they instead await the call directly, since await implicitly conveys waiting for the response, we poll until the operation finishes and return the final result.

// wait for the final result without caring about the polling
const result = await doSth(); 

// to have more control over the polling
const poller = doSth();
await poller.submitted();
const result = await poller; // or: const result = await poller.pollUntilDone();
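A poller that supports both styles can be modeled as a "thenable": awaiting it directly polls to completion, while keeping it around exposes the finer-grained controls. The following is a sketch under assumptions, not the SDK's implementation; the completion condition is simulated with a counter:

```typescript
interface Poller<T> extends PromiseLike<T> {
  submitted(): Promise<void>;
  pollUntilDone(): Promise<T>;
}

// Hypothetical long-running operation: completes after a few polls.
function doSth(): Poller<string> {
  let polls = 0;
  let result: Promise<string> | undefined;
  const poll = async (): Promise<string> => {
    // Simulate polling the service until the operation reports completion.
    while (++polls < 3) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }
    return "done";
  };
  return {
    // Resolves once the initial request has been submitted.
    submitted: () => Promise.resolve(),
    // Starts (or reuses) the polling loop and waits for the final result.
    pollUntilDone: () => (result ??= poll()),
    // Being "thenable" makes `await doSth()` yield the final result directly.
    then: (onfulfilled, onrejected) =>
      (result ??= poll()).then(onfulfilled, onrejected),
  };
}

async function demo() {
  const direct = await doSth(); // awaited directly: polls to completion
  const poller = doSth(); // held as a poller for finer control
  await poller.submitted();
  const controlled = await poller.pollUntilDone();
  console.log(direct, controlled); // done done
}
demo();
```

The memoized `result` promise ensures that mixing `await poller` and `poller.pollUntilDone()` starts only one polling loop.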

Summary

In conclusion, our new modularized libraries for JavaScript represent a significant advancement in how developers interact with Azure services via JavaScript. By applying subpath exports, opt-in helpers, and supporting both ESM and CommonJS modules, we offer a tailored and consistent user experience that caters to diverse customer needs. This modular approach not only enhances flexibility and efficiency but also optimizes performance for applications using Azure SDK for JavaScript libraries to interact with services. These improvements provide developers with the tools they need to build more robust and performant applications.
