Add investigation guide for Amazon Bedrock Rules (#4247)
* Add investigation guide for Amazon Bedrock Rules

* updated date

* review comments

* review comments

---------

Co-authored-by: Terrance DeJesus <[email protected]>

(cherry picked from commit 6a39009)
shashank-elastic authored and github-actions[bot] committed Nov 6, 2024
1 parent ff6a6cb commit ebb9b45
Showing 6 changed files with 213 additions and 7 deletions.
@@ -1,7 +1,7 @@
[metadata]
creation_date = "2024/05/02"
maturity = "production"
updated_date = "2024/10/09"
updated_date = "2024/11/05"
min_stack_comments = "ES|QL rule type is still in technical preview as of 8.13, however this rule was tested successfully; integration in tech preview"
min_stack_version = "8.13.0"

@@ -26,6 +26,42 @@ references = [
]
risk_score = 47
rule_id = "0cd2f3e6-41da-40e6-b28b-466f688f00a6"
note = """## Triage and analysis
### Investigating Amazon Bedrock Guardrail Multiple Policy Violations by a Single User Over a Session
Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications.
It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices.
Through Guardrail, organizations can define "denied topics" to prevent the model from generating content on specific, undesired subjects,
and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language.
#### Possible investigation steps
- Identify the user account that caused multiple policy violations over a session and whether it should perform this kind of action.
- Investigate user activity for patterns that might indicate a brute-force attack, such as repeated guardrail violations in quick succession.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's prompts and responses in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.
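As a starting point, a minimal ES|QL sketch along these lines can surface the accounts with the most violations; the index pattern follows the aws_bedrock integration, and the `gen_ai.*` field names are assumptions that may differ in your environment:
```
from logs-aws_bedrock.invocation-*
// assumed field: set when a guardrail policy violation is detected
| where gen_ai.compliance.violation_detected == true
// count violations per user, model, and account over the lookback window
| stats violations = count(*) by user.id, gen_ai.request.model.id, cloud.account.id
| where violations > 1
| sort violations desc
```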
### False positive analysis
- Verify that the user account that triggered multiple policy violations over a session is not testing new model deployments or updated compliance policies in Amazon Bedrock guardrails.
### Response and remediation
- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Identify whether the attacker is moving laterally and compromising other Amazon Bedrock services.
    - Identify any regulatory or legal ramifications related to this activity.
- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access Bedrock, and that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
"""
setup = """## Setup
This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:
@@ -1,7 +1,7 @@
[metadata]
creation_date = "2024/05/02"
maturity = "production"
updated_date = "2024/10/09"
updated_date = "2024/11/05"
min_stack_comments = "ES|QL rule type is still in technical preview as of 8.13, however this rule was tested successfully; integration in tech preview"
min_stack_version = "8.13.0"

@@ -26,6 +26,42 @@ references = [
]
risk_score = 21
rule_id = "f4c2515a-18bb-47ce-a768-1dc4e7b0fe6c"
note = """## Triage and analysis
### Investigating Amazon Bedrock Guardrail Multiple Policy Violations Within a Single Blocked Request
Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications.
It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices.
Through Guardrail, organizations can define "denied topics" to prevent the model from generating content on specific, undesired subjects,
and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language.
#### Possible investigation steps
- Identify the user account and the user request that caused multiple policy violations and whether it should perform this kind of action.
- Investigate user activity for patterns that might indicate a brute-force attack, such as repeated guardrail violations in quick succession.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's prompts and responses in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.
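To review the blocked requests themselves, a minimal ES|QL sketch such as the one below can help; `gen_ai.policy.action` and the other `gen_ai.*` fields are assumed names from the aws_bedrock integration and may differ in your environment:
```
from logs-aws_bedrock.invocation-*
// assumed fields: guardrail action and violation flag on each request
| where gen_ai.policy.action == "BLOCKED" and gen_ai.compliance.violation_detected == true
| keep @timestamp, user.id, gen_ai.request.model.id, gen_ai.policy.action
| sort @timestamp desc
| limit 100
```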
### False positive analysis
- Verify that the user account that caused multiple policy violations is not testing new model deployments or updated compliance policies in Amazon Bedrock guardrails.
### Response and remediation
- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Identify whether the attacker is moving laterally and compromising other Amazon Bedrock services.
    - Identify any regulatory or legal ramifications related to this activity.
- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access Bedrock, and that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
"""
setup = """## Setup
This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:
@@ -1,7 +1,7 @@
[metadata]
creation_date = "2024/05/05"
maturity = "production"
updated_date = "2024/10/23"
updated_date = "2024/11/05"
min_stack_comments = "ES|QL rule type is still in technical preview as of 8.13, however this rule was tested successfully; integration in tech preview"
min_stack_version = "8.13.0"

@@ -25,6 +25,41 @@ references = [
]
risk_score = 73
rule_id = "4f855297-c8e0-4097-9d97-d653f7e471c4"
note = """## Triage and analysis
### Investigating Amazon Bedrock Guardrail High Confidence Misconduct Blocks
Amazon Bedrock Guardrail is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications.
It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices.
Through Guardrail, organizations can define "denied topics" to prevent the model from generating content on specific, undesired subjects,
and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language.
#### Possible investigation steps
- Identify the user account that queried denied topics and whether it should perform this kind of action.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's prompts and responses in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.
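A sketch like the following can pull a suspect account's recent prompts and responses for manual review; `<suspect-user-id>` is a placeholder, and `gen_ai.prompt` / `gen_ai.completion` are assumed field names that may differ in your environment:
```
from logs-aws_bedrock.invocation-*
// restrict to the account under investigation and the last 24 hours
| where @timestamp > now() - 24 hours and user.id == "<suspect-user-id>"
// assumed fields holding the raw prompt and model response
| keep @timestamp, gen_ai.request.model.id, gen_ai.prompt, gen_ai.completion
| sort @timestamp desc
| limit 100
```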
### False positive analysis
- Verify that the user account that queried denied topics is not testing new model deployments or updated compliance policies in Amazon Bedrock guardrails.
### Response and remediation
- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Identify whether the attacker is moving laterally and compromising other Amazon Bedrock services.
    - Identify any regulatory or legal ramifications related to this activity.
- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access Bedrock, and that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
"""
setup = """## Setup
This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:
@@ -1,7 +1,7 @@
[metadata]
creation_date = "2024/05/04"
maturity = "production"
updated_date = "2024/10/09"
updated_date = "2024/11/05"
min_stack_comments = "ES|QL rule type is still in technical preview as of 8.13, however this rule was tested successfully; integration in tech preview"
min_stack_version = "8.13.0"

@@ -25,6 +25,40 @@ references = [
]
risk_score = 47
rule_id = "b1773d05-f349-45fb-9850-287b8f92f02d"
note = """## Triage and analysis
### Investigating Amazon Bedrock Models High Token Count and Large Response Sizes
Amazon Bedrock is AWS’s managed service that enables developers to build and scale generative AI applications using large foundation models (FMs) from top providers.
Bedrock offers a variety of pretrained models from Amazon (such as the Titan series), as well as models from providers like Anthropic, Meta, Cohere, and AI21 Labs.
#### Possible investigation steps
- Identify the user account associated with the high prompt token counts and whether it should perform this kind of action.
- Investigate large response sizes and the number of requests made by the user account.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's prompts and responses in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.
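To quantify the usage, a minimal ES|QL sketch such as the one below can rank accounts by token consumption; the `gen_ai.usage.*` field names are assumptions based on the aws_bedrock integration and may differ in your environment:
```
from logs-aws_bedrock.invocation-*
| where @timestamp > now() - 24 hours
// assumed usage counters reported per invocation
| stats total_prompt_tokens = sum(gen_ai.usage.prompt_tokens),
        total_completion_tokens = sum(gen_ai.usage.completion_tokens),
        requests = count(*) by user.id
| sort total_prompt_tokens desc
```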
### False positive analysis
- Verify that the user account behind the high token counts and large response sizes has a business justification for the heavy usage of the system.
### Response and remediation
- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Identify whether the attacker is moving laterally and compromising other Amazon Bedrock services.
    - Identify any regulatory or legal ramifications related to this activity.
    - Identify potential resource exhaustion and its impact on billing.
- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access Bedrock, and that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
"""
setup = """## Setup
This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:
@@ -1,7 +1,7 @@
[metadata]
creation_date = "2024/05/02"
maturity = "production"
updated_date = "2024/10/09"
updated_date = "2024/11/05"
min_stack_comments = "ES|QL rule type is still in technical preview as of 8.13, however this rule was tested successfully; integration in tech preview"
min_stack_version = "8.13.0"

@@ -26,6 +26,38 @@ references = [
]
risk_score = 73
rule_id = "17261da3-a6d0-463c-aac8-ea1718afcd20"
note = """## Triage and analysis
### Investigating Attempt to use Denied Amazon Bedrock Models
Amazon Bedrock is AWS’s managed service that enables developers to build and scale generative AI applications using large foundation models (FMs) from top providers.
Bedrock offers a variety of pretrained models from Amazon (such as the Titan series), as well as models from providers like Anthropic, Meta, Cohere, and AI21 Labs.
#### Possible investigation steps
- Identify the user account that attempted to use denied models.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's attempts to access Amazon Bedrock models in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.
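A minimal ES|QL sketch along these lines can summarize the denied attempts per account and model; `gen_ai.response.error_code` is an assumed field name, and the error string may differ in your environment:
```
from logs-aws_bedrock.invocation-*
// assumed field: error code returned when model access is denied
| where gen_ai.response.error_code == "AccessDeniedException"
| stats attempts = count(*) by user.id, gen_ai.request.model.id
| sort attempts desc
```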
### False positive analysis
- Verify whether the attempts to use denied models stem from a legitimate misunderstanding by the user or from overly strict policies.
### Response and remediation
- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Identify whether the attacker is moving laterally and compromising other Amazon Bedrock services.
    - Identify any regulatory or legal ramifications related to this activity.
- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access Bedrock, and that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
"""
setup = """## Setup
This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:
@@ -2,7 +2,7 @@
creation_date = "2024/09/11"
integration = ["aws_bedrock"]
maturity = "production"
updated_date = "2024/10/09"
updated_date = "2024/11/05"
min_stack_comments = "ES|QL rule type is still in technical preview as of 8.13, however this rule was tested successfully; integration in tech preview"
min_stack_version = "8.13.0"

@@ -15,7 +15,7 @@ These errors also occur when you use an inference parameter for one model with a
This could indicate attempts to bypass limitations of other approved models, or to force an impact on the environment by incurring
exorbitant costs.
"""
false_positives = ["Legitimate misunderstanding by users or overly strict policies"]
false_positives = ["Legitimate misunderstanding by users on accessing the bedrock models."]
from = "now-60m"
interval = "10m"
language = "esql"
@@ -29,6 +29,39 @@ references = [
]
risk_score = 73
rule_id = "725a048a-88c5-4fc7-8677-a44fc0031822"
note = """## Triage and analysis
### Investigating Amazon Bedrock Model Validation Exception Errors
Amazon Bedrock is AWS’s managed service that enables developers to build and scale generative AI applications using large foundation models (FMs) from top providers.
Bedrock offers a variety of pretrained models from Amazon (such as the Titan series), as well as models from providers like Anthropic, Meta, Cohere, and AI21 Labs.
#### Possible investigation steps
- Identify the user account that caused the validation errors when accessing the Amazon Bedrock models.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's attempts to access Amazon Bedrock models in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.
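A sketch like the following can rank accounts by validation failures; `gen_ai.response.error_code` is an assumed field name, and the threshold is illustrative:
```
from logs-aws_bedrock.invocation-*
// assumed field: error code surfaced on ValidationException responses
| where gen_ai.response.error_code == "ValidationException"
| stats errors = count(*) by user.id, gen_ai.request.model.id
| where errors > 3
| sort errors desc
```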
### False positive analysis
- Verify whether the validation errors stem from a legitimate misunderstanding by the user when accessing the Bedrock models.
### Response and remediation
- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Identify whether the attacker is moving laterally and compromising other Amazon Bedrock services.
    - Identify any regulatory or legal ramifications related to this activity.
    - Identify any implications for resource billing.
- Review the permissions assigned to the implicated user group or role behind these requests to ensure they are authorized and expected to access Bedrock, and that the principle of least privilege is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
"""
setup = """## Setup
This rule requires that AWS Bedrock Integration be configured. For more information, see the AWS Bedrock integration documentation:
