In this post, App Dev Manager Keith Anderson explores how Azure Key Vault can be used to secure access to Azure storage operations.
In the beginning…
As I began to learn about Microsoft's cloud a couple of years ago, I realized that some services had been around since its inception and were fundamental to the way things worked, with other services built on top. Azure Storage is certainly one of those fundamental services, a building block you can use in your own applications. For any organization leveraging Azure Storage, it seems to me that, as with any tool or construction material, you can make mistakes if you aren't careful, and those mistakes can be dangerous to whatever you are creating.
What is a software architect anyway?
I was once hired to be the first software architect in an organization where I had already been working as a developer. Prior to that point, the role did not formally exist there, though many of the architectural concerns were handled in a decentralized way by one or more individuals throughout my tenure. After I accepted the role, I had to help define it. After thinking about it for a while and doing a bit of research, I settled on a definition centered around making high-level decisions about technologies and tools, such as which language or RDBMS to use, so that developers could focus on implementation without having to reinvent the wheel. It also meant controlling and standardizing the way those technologies and tools could be used. Taken to its logical conclusion, to me this meant owning most if not all of the cross-cutting concerns, including database access, identity and security (AuthN/AuthZ), and SOA-level event publication/subscription.
Storage as a cross-cutting concern
Azure Storage is a tool that fits the definition of one of these cross-cutting concerns. You could leave it up to each developer to decide how to use it, but in doing so you would leave your organization open to security flaws that could prove costly and dangerous. Case in point: if you have ever downloaded and looked at one of the storage code samples, you have seen something like the following:
// Parse the connection string (which contains the account key) from configuration.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the CloudBlobClient that is used to call the Blob Service for that storage account.
CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();

// Or create the CloudTableClient, the queue client, and so on.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
Then elsewhere in the documentation, you will find a warning never to share your storage key, yet the account key is exactly what you are getting from CloudConfigurationManager.GetSetting("StorageConnectionString"). We tell you this over and over again: never share your keys and secrets. So, will your developers know not to do this after downloading the samples, or will they just do whatever it takes to get things working?
What is the danger?
So, what's the big deal anyway? Why shouldn't you pass your storage key around to all of your applications and developers? The typical answer is that if it gets into the hands of someone you don't trust, you'll have to change it, and then update every single application configured to use it. Sometimes that means redeploying the entire application, or at least updating its configuration. The other danger is that with the access key you can do anything, including deleting all of your data. There is no concept of Role-Based Access Control, no way to grant permission for only what is needed. So, how do you protect your data while giving developers and applications access to do only what they need to do, and only when they need to do it?
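To make the risk concrete, here is a short, purely illustrative sketch. Assuming you hold a connection string containing the account key, nothing is scoped and nothing is revocable:

// With the account key, anyone can do anything, for example enumerate
// and delete every container in the account.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();

foreach (CloudBlobContainer container in blobClient.ListContainers())
{
    container.Delete(); // full access: no roles, no scope, no expiry
}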
Can Shared Access Signatures help?
One answer may be to create a SAS token, grant it just enough rights to perform the action required, and give it an expiration so that it can't be used indefinitely. SAS tokens can be used to instantiate the various Cloud{type}Client objects just as a CloudStorageAccount object can.
// On the service side: create a shared access policy that expires in 30 minutes.
// No start time is specified, which means the token is valid immediately.
// The policy specifies full permissions.
SharedAccessTablePolicy policy = new SharedAccessTablePolicy()
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(
        SasProducer.AccessPolicyDurationInMinutes),
    Permissions = SharedAccessTablePermissions.Add
        | SharedAccessTablePermissions.Query
        | SharedAccessTablePermissions.Update
        | SharedAccessTablePermissions.Delete
};

// Generate the SAS token. No access policy identifier is used, which
// makes it a non-revocable token. The table SAS access is limited to
// only the requesting customer's id.
string sasToken = cloudTable.GetSharedAccessSignature(
    policy,     /* access policy */
    null,       /* access policy identifier */
    customerId, /* start partition key */
    null,       /* start row key */
    customerId, /* end partition key */
    null        /* end row key */);

// On the client side: request the token from the service, then use it
// to build storage credentials and a table client.
string clientSasToken = this.addressBookService.RequestSasToken(this.customerId);

// Create credentials using the new token.
StorageCredentials credentials = new StorageCredentials(clientSasToken);
CloudTableClient tableClient = new CloudTableClient(tableEndpoint, credentials);
Wait a minute, though. That cloudTable object we're using to call GetSharedAccessSignature() came from somewhere. To instantiate a CloudTable object that can sign a SAS, you need the storage access key, so it seems we're back to square one, with all of our applications needing to know our storage access key.
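For reference, the producer side typically builds that object from the account key, along these lines (a sketch; the table name is illustrative):

// The SAS producer still needs the account key to create the CloudTable
// whose GetSharedAccessSignature() call signs the token.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudTable cloudTable = storageAccount.CreateCloudTableClient()
    .GetTableReference("AddressBook");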
Isolation and control behind a service
Well, not quite. What if we cut the code in half and isolate the part that creates the SAS token behind a service? That way, we share our storage access key with only one application, and that service is responsible for producing and distributing SAS tokens to all of our other clients. Storage access becomes a cross-cutting service that the architect can maintain and control, and that the rest of the organization (in an enterprise scenario) or your products (in a software-as-a-service scenario) can consume as a building block.
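As a minimal sketch of what such a service could look like, here is a hypothetical ASP.NET Web API controller (the controller, table name, and policy are illustrative; a real service would also authenticate its callers):

public class SasTokenController : ApiController
{
    // The account key lives only in this service's configuration.
    private static readonly CloudStorageAccount Account = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting("StorageConnectionString"));

    [HttpGet]
    public string GetTableSas(string customerId)
    {
        CloudTable table = Account.CreateCloudTableClient()
            .GetTableReference("AddressBook");

        // Short-lived, minimally scoped policy for this caller.
        var policy = new SharedAccessTablePolicy
        {
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30),
            Permissions = SharedAccessTablePermissions.Query
        };

        // Constrain the token to the caller's partition before handing it out.
        return table.GetSharedAccessSignature(
            policy, null, customerId, null, customerId, null);
    }
}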
That is exactly the best practice the storage team wrote about in detail back in 2012 in this excellent blog post. Follow that post to create your very own storage service.
Once implemented, if your key is ever compromised, or rotated according to your maintenance schedule, you only need to make the change and redeploy in one place. You also limit the attack surface, making it less likely that your key is compromised in the first place.
Can Key Vault help?
Yes! The storage account keys feature of Key Vault is now in public preview. This service can do many of the things discussed above and a lot more.
https://docs.microsoft.com/en-us/azure/key-vault/key-vault-ovw-storage-keys
// Create a KeyVaultClient with vault credentials.
var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(securityToken));

// Get a SAS token for our storage from Key Vault.
var sasToken = await kv.GetSecretAsync("SecretUri");

// Create new storage credentials using the SAS token.
var accountSasCredential = new StorageCredentials(sasToken.Value);

// Use the storage credentials and the Blob storage endpoint to create a new Blob service client.
var accountWithSas = new CloudStorageAccount(
    accountSasCredential, new Uri("https://myaccount.blob.core.windows.net/"), null, null, null);

var blobClientWithSas = accountWithSas.CreateCloudBlobClient();

// Use the blobClientWithSas ...

// If your SAS token is about to expire, get the SAS token again from Key Vault and update it.
sasToken = await kv.GetSecretAsync("SecretUri");
accountSasCredential.UpdateSASToken(sasToken.Value);
In this scenario, Key Vault becomes your protective service: it brokers access to your storage services, manages your storage account keys for you, never exposes them to clients, and rotates them regularly as a matter of best practice.
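Building on the sample above (same names and assumptions; the container name is hypothetical), one way to handle token expiry is to refresh the SAS from Key Vault when storage rejects a request:

// A sketch: if storage rejects a call with 403 (for example, the SAS expired),
// fetch a fresh token from Key Vault, update the credentials, and retry.
try
{
    var container = blobClientWithSas.GetContainerReference("data");
    container.FetchAttributes();
}
catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 403)
{
    var freshToken = await kv.GetSecretAsync("SecretUri");
    accountSasCredential.UpdateSASToken(freshToken.Value);
    // ...retry the operation with the refreshed credentials.
}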
The actual secret stored in Key Vault is an account SAS URI that can be used to create the various storage client objects. As such, it cannot be used with stored access policies at this time: stored access policies are defined on resource containers rather than at the account level, and a signature based on a policy can be revoked by revoking the policy. You cannot do that with an ad-hoc account SAS today. If you want the added security of being able to revoke access after it has been granted but before it expires, you should create your own storage access service.
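For contrast, here is a rough sketch of revocable access using a stored access policy on a blob container (the policy name and container are hypothetical, and managing policies requires account-key credentials, so this belongs inside your trusted service):

// Assume storageAccount was created with account-key credentials.
CloudBlobContainer container = storageAccount.CreateCloudBlobClient()
    .GetContainerReference("customer-docs");

// Define a named (stored) access policy on the container.
BlobContainerPermissions permissions = container.GetPermissions();
permissions.SharedAccessPolicies.Add("read-only-policy", new SharedAccessBlobPolicy
{
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1),
    Permissions = SharedAccessBlobPermissions.Read
});
container.SetPermissions(permissions);

// Issue a SAS that references the policy by identifier instead of
// embedding its own expiry and permissions.
string revocableSas = container.GetSharedAccessSignature(null, "read-only-policy");

// To revoke every token issued against the policy, remove the policy.
permissions.SharedAccessPolicies.Remove("read-only-policy");
container.SetPermissions(permissions);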
Wrapup
Like most Azure services, Azure Storage is easy to get up and running with, but understanding how to operationalize it in a production environment is more complex. You have to take the entire landscape of scale and security into account, and the storage service is no different. Azure Storage is an extremely useful tool and building block for a myriad of uses, and getting its security right is something any architect will want to devote thought and energy to. The key takeaway: limit exposure of your storage access key, and the best way to do that is to protect it behind a service.
Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.