{% hint style="success" %}
Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Support HackTricks
- Check the subscription plans!
- Join the 💬 Discord group or the telegram group or follow us on Twitter 🐦 @hacktricks_live.
- Share hacking tricks by submitting PRs to the HackTricks and HackTricks Cloud github repos.
{% endhint %}
Azure Storage Accounts are fundamental services in Microsoft Azure that provide scalable, secure, and highly available cloud storage for various data types, including blobs (binary large objects), files, queues, and tables. They serve as containers that group these different storage services together under a single namespace for easy management.
Main configuration options:
- Every storage account must have a unique name across all of Azure.
- Every storage account is deployed in a region or in an Azure extended zone.
- It's possible to select the premium version of the storage account for better performance.
- It's possible to select among 4 types of redundancy to protect against drive, rack and datacenter failures.
Security configuration options:
- Require secure transfer for REST API operations: require TLS in any communication with the storage account
- Allow enabling anonymous access on individual containers: if disabled, it won't be possible to enable anonymous access on any container later
- Enable storage account key access: if disabled, access with the account access keys (Shared Key authorization) will be forbidden
- Minimum TLS version
- Permitted scope for copy operations: allow copies from any storage account, only from storage accounts in the same Entra tenant, or only from storage accounts with private endpoints in the same virtual network.
Blob Storage options:
- Allow cross-tenant replication
- Access tier: Hot (frequently accessed data), Cool and Cold (rarely accessed data)
Networking options:
- Network access:
- Allow from all networks
- Allow from selected virtual networks and IP addresses
- Disable public access and use private access
- Private endpoints: It allows a private connection to the storage account from a virtual network
Data protection options:
- Point-in-time restore for containers: allows restoring containers to an earlier state
- It requires versioning, change feed, and blob soft delete to be enabled.
- Enable soft delete for blobs: enables a retention period in days for deleted blobs (this also covers overwritten blobs)
- Enable soft delete for containers: enables a retention period in days for deleted containers
- Enable soft delete for file shares: enables a retention period in days for deleted file shares
- Enable versioning for blobs: Maintain previous versions of your blobs
- Enable blob change feed: Keep logs of create, modification, and delete changes to blobs
- Enable version-level immutability support: allows setting a time-based retention policy at the account level that will apply to all blob versions.
- Version-level immutability support and point-in-time restore for containers cannot be enabled simultaneously.
Encryption configuration options:
- Encryption type: It's possible to use Microsoft-managed keys (MMK) or Customer-managed keys (CMK)
- Enable infrastructure encryption: allows double-encrypting the data "for more security"
| Storage Service | Endpoint |
|---|---|
| Blob storage | `https://<storage-account>.blob.core.windows.net` (list container blobs: `https://<storage-account>.blob.core.windows.net/<container-name>?restype=container&comp=list`) |
| Data Lake Storage | `https://<storage-account>.dfs.core.windows.net` |
| Azure Files | `https://<storage-account>.file.core.windows.net` |
| Queue storage | `https://<storage-account>.queue.core.windows.net` |
| Table storage | `https://<storage-account>.table.core.windows.net` |
If "Allow Blob public access" is enabled (disabled by default), when creating a container it's possible to:
- Give public access to read blobs (you need to know the name).
- List container blobs and read them.
- Make it fully private
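As a quick, unauthenticated way to test these access levels, the following minimal Python sketch (assuming the `requests` library; account, container and blob names are placeholders) tries an anonymous container listing and an anonymous blob read:

```python
import requests

account = "<storage-account>"
container = "<container-name>"

# Anonymous container listing: only works with "container" level public access
r = requests.get(
    f"https://{account}.blob.core.windows.net/{container}?restype=container&comp=list"
)
print("anonymous listing:", r.status_code)   # 200 => blobs can be listed anonymously

# Anonymous blob read: works with "blob" or "container" level public access
r = requests.get(f"https://{account}.blob.core.windows.net/{container}/<blob-name>")
print("anonymous blob read:", r.status_code) # 200 => blob readable anonymously
```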
If you find any storage account you can connect to, you can use the tool Microsoft Azure Storage Explorer to browse it.
It's possible to use Entra ID principals with RBAC roles to access storage accounts, and this is the recommended way.
Storage accounts also have access keys that can be used to access them. These keys provide full access to the storage account.
It's possible to generate Shared Key signatures with the access keys to authorize requests to specific resources (the signature is sent in the Authorization header, as shown below).
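A minimal sketch of both access paths, assuming the `azure-identity` and `azure-storage-blob` Python SDKs are available (account name and key are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

account_url = "https://<storage-account>.blob.core.windows.net"

# 1) Entra ID principal + RBAC roles (recommended)
client = BlobServiceClient(account_url, credential=DefaultAzureCredential())

# 2) Storage account access key (full access to the whole storage account)
# client = BlobServiceClient(account_url, credential="<account-access-key>")

# Any data-plane operation, e.g. enumerate containers
for container in client.list_containers():
    print(container.name)
```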
{% hint style="info" %}
Note that the CanonicalizedResource
part represents the storage services resource (URI). And if any part in the URL is encoded, it should also be encoded inside the CanonicalizedResource
.
{% endhint %}
{% hint style="info" %}
This is used by default by az
cli to authenticate requests. To make it use the Entra ID principal credentials indicate the param --auth-mode login
.
{% endhint %}
- It's possible to generate a shared key for blob, queue and file services signing the following information:
StringToSign = VERB + "\n" +
Content-Encoding + "\n" +
Content-Language + "\n" +
Content-Length + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
If-Modified-Since + "\n" +
If-Match + "\n" +
If-None-Match + "\n" +
If-Unmodified-Since + "\n" +
Range + "\n" +
CanonicalizedHeaders +
CanonicalizedResource;
- It's possible to generate a shared key for table services signing the following information:
StringToSign = VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
CanonicalizedResource;
- It's possible to generate a lite shared key for blob, queue and file services signing the following information:
StringToSign = VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
CanonicalizedHeaders +
CanonicalizedResource;
- It's possible to generate a lite shared key for table services signing the following information:
StringToSign = Date + "\n"
CanonicalizedResource
Then, to use the generated signature, it's included in the Authorization header with the following syntax:
Authorization="[SharedKey|SharedKeyLite] <AccountName>:<Signature>"
#e.g.
Authorization: SharedKey myaccount:ctzMq410TV3wS7upTBcunJTDLEJwMAZuFPfr0mrrA08=
PUT http://myaccount/mycontainer?restype=container&timeout=30 HTTP/1.1
x-ms-version: 2014-02-14
x-ms-date: Fri, 26 Jun 2015 23:39:12 GMT
Authorization: SharedKey myaccount:ctzMq410TV3wS7upTBcunJTDLEJwMAZuFPfr0mrrA08=
Content-Length: 0
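As a minimal sketch of how that signature is computed (assuming the `requests` library; the account name and key are placeholders), the following signs a List Containers request with an account access key:

```python
import base64, hashlib, hmac
from datetime import datetime, timezone

import requests

account = "<storage-account>"
account_key = "<base64-account-access-key>"
api_version = "2021-08-06"

x_ms_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

# CanonicalizedHeaders: every x-ms-* header, lowercased and sorted
canonicalized_headers = f"x-ms-date:{x_ms_date}\nx-ms-version:{api_version}\n"
# CanonicalizedResource: /<account>/<url-path> plus the sorted query parameters
canonicalized_resource = f"/{account}/\ncomp:list"

# StringToSign for blob/queue/file Shared Key: all standard headers are empty here,
# so VERB is followed by 11 empty fields (12 newlines in total)
string_to_sign = "GET" + "\n" * 12 + canonicalized_headers + canonicalized_resource

signature = base64.b64encode(
    hmac.new(
        base64.b64decode(account_key),
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
).decode()

resp = requests.get(
    f"https://{account}.blob.core.windows.net/?comp=list",
    headers={
        "x-ms-date": x_ms_date,
        "x-ms-version": api_version,
        "Authorization": f"SharedKey {account}:{signature}",
    },
)
print(resp.status_code)
print(resp.text[:500])
```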
Shared Access Signatures (SAS) are secure, time-limited URLs that grant specific permissions to access resources in an Azure Storage account without exposing the account's access keys. While access keys provide full administrative access to all resources, SAS allows for granular control by specifying permissions (like read or write) and defining an expiration time.
- User delegation SAS: This is created from an Entra ID principal, which signs the SAS and delegates its own permissions to it. It can only be used with Blob and Data Lake storage (docs). It's possible to revoke all generated user delegation SAS.
- Even if it's possible to generate a user delegation SAS with "more" permissions than the ones the principal has, if the principal doesn't actually have them the SAS won't work (no privesc).
- Service SAS: This is signed using one of the storage account access keys. It can be used to grant access to specific resources in a single storage service. If the key is renewed, the SAS will stop working.
- Account SAS: It's also signed with one of the storage account access keys. It grants access to resources across a storage account services (Blob, Queue, Table, File) and can include service-level operations.
A SAS URL signed by an access key looks like this:
https://<storage-account>.blob.core.windows.net/newcontainer?sp=r&st=2021-09-26T18:15:21Z&se=2021-10-27T02:14:21Z&spr=https&sv=2021-07-08&sr=c&sig=7S%2BZySOgy4aA3Dk0V1cJyTSIf1cW%2Fu3WFkhHV32%2B4PE%3D
A SAS URL signed as a user delegation looks like this:
https://<storage-account>.blob.core.windows.net/testing-container?sp=r&st=2024-11-22T15:07:40Z&se=2024-11-22T23:07:40Z&skoid=d77c71a1-96e7-483d-bd51-bd753aa66e62&sktid=fdd066e1-ee37-49bc-b08f-d0e152119b04&skt=2024-11-22T15:07:40Z&ske=2024-11-22T23:07:40Z&sks=b&skv=2022-11-02&spr=https&sv=2022-11-02&sr=c&sig=7s5dJyeE6klUNRulUj9TNL0tMj2K7mtxyRc97xbYDqs%3D
Note some HTTP params:
- The `se` param indicates the expiration date of the SAS
- The `sp` param indicates the permissions of the SAS
- The `sig` param is the signature validating the SAS
When generating a SAS it's necessary to indicate the permissions it should grant. Depending on the object the SAS is generated over, different permissions may be available (a generation sketch follows this list). For example:
- (a)dd, (c)reate, (d)elete, (e)xecute, (f)ilter_by_tags, (i)set_immutability_policy, (l)ist, (m)ove, (r)ead, (t)ag, (w)rite, (x)delete_previous_version, (y)permanent_delete
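As a sketch of generating a service SAS over a container with an account access key, assuming the `azure-storage-blob` Python SDK (names and key are placeholders):

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

account = "<storage-account>"
container = "<container-name>"
account_key = "<account-access-key>"

# Grant read + list on the container for one hour (maps to the sp and se params)
sas_token = generate_container_sas(
    account_name=account,
    container_name=container,
    account_key=account_key,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(f"https://{account}.blob.core.windows.net/{container}?{sas_token}")
```

The printed URL can then be used directly (plain HTTP, Storage Explorer, or `--sas-token` in az cli) until the `se` expiry, or until the signing access key is rotated.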
Azure Blob Storage now supports the SSH File Transfer Protocol (SFTP), enabling secure file transfer and management directly to Blob Storage without requiring custom solutions or third-party products.
- Protocol Support: SFTP works with Blob Storage accounts configured with hierarchical namespace (HNS). This organizes blobs into directories and subdirectories for easier navigation.
- Security: SFTP uses local user identities for authentication and does not integrate with RBAC or ABAC. Each local user can authenticate via:
- Azure-generated passwords
- Public-private SSH key pairs
- Granular Permissions: Permissions such as Read, Write, Delete, and List can be assigned to local users for up to 100 containers.
- Networking Considerations: SFTP connections are made through port 22. Azure supports network configurations like firewalls, private endpoints, or virtual networks to secure SFTP traffic.
- Hierarchical Namespace: HNS must be enabled when creating the storage account.
- Supported Encryption: Requires Microsoft Security Development Lifecycle (SDL)-approved cryptographic algorithms (e.g., rsa-sha2-256, ecdsa-sha2-nistp256).
- SFTP Configuration:
- Enable SFTP on the storage account.
- Create local user identities with appropriate permissions.
- Configure home directories for users to define their starting location within the container (a connection sketch is shown after the permissions table below).
| Permission | Symbol | Description |
|---|---|---|
| Read | r | Read file content. |
| Write | w | Upload files and create directories. |
| List | l | List contents of directories. |
| Delete | d | Delete files or directories. |
| Create | c | Create files or directories. |
| Modify Ownership | o | Change the owning user or group. |
| Modify Permissions | p | Change ACLs on files or directories. |
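Once SFTP is enabled and a local user exists, connecting is plain SFTP on port 22. A minimal sketch with the `paramiko` Python library (names and password are placeholders; the username follows the documented `<account>.<container>.<local-user>` connection-string format):

```python
import paramiko

host = "<storage-account>.blob.core.windows.net"
username = "<storage-account>.<container-name>.<local-user-name>"
password = "<azure-generated-password>"  # or authenticate with an SSH key pair instead

# Open an SFTP session against the Blob Storage SFTP endpoint (port 22)
transport = paramiko.Transport((host, 22))
transport.connect(username=username, password=password)
sftp = paramiko.SFTPClient.from_transport(transport)

print(sftp.listdir("."))                  # list the local user's home directory
sftp.get("<remote-file>", "<local-file>") # download a blob as a file
sftp.close()
transport.close()
```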
{% tabs %} {% tab title="az cli" %} {% code overflow="wrap" %}
# Get storage accounts
az storage account list #Get the account name from here
# BLOB STORAGE
## List containers
az storage container list --account-name <name>
## Check if public access is allowed
az storage container show-permission \
--account-name <acc-name> \
-n <container-name>
## Make a container public
az storage container set-permission \
--public-access container \
--account-name <acc-name> \
-n <container-name>
## List blobs in a container
az storage blob list \
--container-name <container name> \
--account-name <account name>
## Download blob
az storage blob download \
--account-name <account name> \
--container-name <container name> \
--name <blob name> \
--file </path/to/local/file>
## Create container policy
az storage container policy create \
--account-name mystorageaccount \
--container-name mycontainer \
--name fullaccesspolicy \
--permissions racwdl \
--start 2023-11-22T00:00Z \
--expiry 2024-11-22T00:00Z
# QUEUE
az storage queue list --account-name <name>
az storage message peek --account-name <name> --queue-name <queue-name>
# ACCESS KEYS
az storage account keys list --account-name <name>
## Check key policies (expiration time?)
az storage account show -n <name> --query "{KeyPolicy:keyPolicy}"
## Once having the key, it's possible to use it with the argument --account-key
## Enum blobs with account key
az storage blob list \
--container-name <container name> \
--account-name <account name> \
--account-key "ZrF40pkVKvWPUr[...]v7LZw=="
## Download a file using an account key
az storage blob download \
--account-name <account name> \
--account-key "ZrF40pkVKvWPUr[...]v7LZw==" \
--container-name <container name> \
--name <blob name> \
--file </path/to/local/file>
## Upload a file using an account key
az storage blob upload \
--account-name <account name> \
--account-key "ZrF40pkVKvWPUr[...]v7LZw==" \
--container-name <container name> \
--file </path/to/local/file>
# SAS
## List access policies
az storage <container|queue|share|table> policy list \
--account-name <acc name> \
--container-name <container name>
## Generate SAS with all permissions using an access key
az storage <container|queue|share|table|blob> generate-sas \
--permissions acdefilmrtwxy \
--expiry 2024-12-31T23:59:00Z \
--account-name <acc-name> \
-n <container-name>
## Generate SAS with all permissions via user delegation
az storage <container|queue|share|table|blob> generate-sas \
--permissions acdefilmrtwxy \
--expiry 2024-12-31T23:59:00Z \
--account-name <acc-name> \
--as-user --auth-mode login \
-n <container-name>
## Generate account SAS
az storage account generate-sas \
--expiry 2024-12-31T23:59:00Z \
--account-name <acc-name> \
--services qt \
--resource-types sco \
--permissions acdfilrtuwxy
## Use the returned SAS key with the param --sas-token
## e.g.
az storage blob show \
--account-name <account name> \
--container-name <container name> \
--sas-token 'se=2024-12-31T23%3A59%3A00Z&sp=racwdxyltfmei&sv=2022-11-02&sr=c&sig=ym%2Bu%2BQp5qqrPotIK5/rrm7EMMxZRwF/hMWLfK1VWy6E%3D' \
--name 'asd.txt'
# LOCAL USERS
## List users
az storage account local-user list \
--account-name <storage-account-name> \
--resource-group <resource-group-name>
## Get user
az storage account local-user show \
--account-name <storage-account-name> \
--resource-group <resource-group-name> \
--name <local-user-name>
## List keys (SSH authorized keys) of a local user
az storage account local-user keys list \
--account-name <storage-account-name> \
--resource-group <resource-group-name> \
--name <local-user-name>
{% endcode %} {% endtab %}
{% tab title="Az PowerShell" %} {% code overflow="wrap" %}
# Get storage accounts
Get-AzStorageAccount | fl
# Get rules to access the storage account
Get-AzStorageAccount | select -ExpandProperty NetworkRuleSet
# Get IPs
(Get-AzStorageAccount | select -ExpandProperty NetworkRuleSet).IPRules
# Get containers of a storage account
Get-AzStorageContainer -Context (Get-AzStorageAccount -name <NAME> -ResourceGroupName <NAME>).context
# Get blobs inside container
Get-AzStorageBlob -Container epbackup-planetary -Context (Get-AzStorageAccount -name <name> -ResourceGroupName <name>).context
# Get a blob from a container
Get-AzStorageBlobContent -Container <NAME> -Context (Get-AzStorageAccount -name <NAME> -ResourceGroupName <NAME>).context -Blob <blob_name> -Destination .\Desktop\filename.txt
# Create a Container Policy
New-AzStorageContainerStoredAccessPolicy `
-Context (Get-AzStorageAccount -Name <NAME> -ResourceGroupName <NAME>).Context `
-Container <container-name> `
-Policy <policy-name> `
-Permission racwdl `
-StartTime (Get-Date "2023-11-22T00:00Z") `
-ExpiryTime (Get-Date "2024-11-22T00:00Z")
#Get Container policy
Get-AzStorageContainerStoredAccessPolicy `
-Context (Get-AzStorageAccount -Name <NAME> -ResourceGroupName <NAME>).Context `
-Container "storageaccount1994container"
# Queue Management
Get-AzStorageQueue -Context (Get-AzStorageAccount -Name <NAME> -ResourceGroupName <NAME>).Context
(Get-AzStorageQueue -Name <NAME> -Context (Get-AzStorageAccount -name <NAME> -ResourceGroupName <NAME>).Context).QueueClient.PeekMessage().Value
#Blob Container
Get-AzStorageBlob -Container <container-name> -Context $(Get-AzStorageAccount -name "teststorageaccount1998az" -ResourceGroupName "testStorageGroup").Context
Get-AzStorageBlobContent `
-Container <container-name> `
-Blob <blob-name> `
-Destination <local-path> `
-Context $(Get-AzStorageAccount -name "teststorageaccount1998az" -ResourceGroupName "testStorageGroup").Context
Set-AzStorageBlobContent `
-Container <container-name> `
-File <local-file-path> `
-Blob <blob-name> `
-Context $(Get-AzStorageAccount -name "teststorageaccount1998az" -ResourceGroupName "testStorageGroup").Context
# Shared Access Signatures (SAS)
Get-AzStorageContainerAcl `
-Container <container-name> `
-Context (Get-AzStorageAccount -Name <NAME> -ResourceGroupName <NAME>).Context
$ctx = (Get-AzStorageAccount -Name <NAME> -ResourceGroupName <NAME>).Context
New-AzStorageBlobSASToken `
-Context $ctx `
-Container <container-name> `
-Blob <blob-name> `
-Permission racwdl `
-ExpiryTime (Get-Date "2024-12-31T23:59:00Z")
{% endcode %} {% endtab %} {% endtabs %}
{% content-ref url="az-file-shares.md" %} az-file-shares.md {% endcontent-ref %}
{% content-ref url="../az-privilege-escalation/az-storage-privesc.md" %} az-storage-privesc.md {% endcontent-ref %}
{% content-ref url="../az-post-exploitation/az-blob-storage-post-exploitation.md" %} az-blob-storage-post-exploitation.md {% endcontent-ref %}
{% content-ref url="../az-persistence/az-storage-persistence.md" %} az-storage-persistence.md {% endcontent-ref %}
- https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction
- https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview
- https://learn.microsoft.com/en-us/azure/storage/blobs/secure-file-transfer-protocol-support
{% hint style="success" %}
Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Support HackTricks
- Check the subscription plans!
- Join the 💬 Discord group or the telegram group or follow us on Twitter 🐦 @hacktricks_live.
- Share hacking tricks by submitting PRs to the HackTricks and HackTricks Cloud github repos.
{% endhint %}