Amazon Bedrock is a fully managed service that offers a choice of foundation models (FMs) along with a broad set of capabilities for building generative AI applications.
This module includes resources to deploy Bedrock features.
With Knowledge Bases for Amazon Bedrock, you can give FMs and agents contextual information from your company’s private data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, accurate, and customized responses.
A vector index on a vector store is required to create a Knowledge Base. This construct currently supports Amazon OpenSearch Serverless, Amazon RDS Aurora PostgreSQL, Pinecone, and MongoDB. By default, this resource creates an OpenSearch Serverless vector collection and index for each Knowledge Base you create, but you can provide an existing collection for more control. For the other vector stores, you must create them beforehand and store their credentials in AWS Secrets Manager.
The resource accepts an instruction prop that is provided to any Bedrock Agent it is associated with so the agent can decide when to query the Knowledge Base.
To create a knowledge base, make sure you pass in the appropriate variables and set the create_kb variable to true.
Example: default OpenSearch Serverless Agent with a Knowledge Base
provider "opensearch" {
url = module.bedrock.default_collection[0].collection_endpoint
healthcheck = false
}
module "bedrock" {
source = "aws-ia/bedrock/aws"
version = "0.0.5"
create_kb = true
create_default_kb = true
foundation_model = "anthropic.claude-v2"
instruction = "You are an automotive assisant who can provide detailed information about cars to a customer."
}
Data sources are the various repositories or systems from which information is extracted and ingested into the knowledge base. These sources provide the raw content that will be processed, indexed, and made available for querying within the knowledge base system. Data sources can include various types of systems such as document management systems, databases, file storage systems, and content management platforms. Supported data sources include Amazon S3 buckets, Web Crawlers, SharePoint sites, Salesforce instances, and Confluence spaces; a configuration sketch follows the list below.
- Amazon S3. You can use an existing data source by passing its ARN to the input variable kb_s3_data_source, or create a new one by setting create_s3_data_source to true.
- Web Crawler. You can create a new web crawler data source by setting the create_web_crawler input variable to true and passing in the necessary variables for URLs, scope, etc.
- SharePoint. You can create a new SharePoint data source by setting the create_sharepoint input variable to true and passing in the necessary variables for site URLs, filter patterns, etc.
- Salesforce. You can create a new Salesforce data source by setting the create_salesforce input variable to true and passing in the necessary variables for site URLs, filter patterns, etc.
- Confluence. You can create a new Confluence data source by setting the create_confluence input variable to true and passing in the necessary variables for site URLs, filter patterns, etc.
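As a rough sketch, reusing an existing S3 data source with the default Knowledge Base might look like the following. The create_kb, create_default_kb, and kb_s3_data_source variables come from this README; the ARN value is a placeholder to replace with your own:
module "bedrock" {
  source  = "aws-ia/bedrock/aws"
  version = "0.0.5"

  create_kb         = true
  create_default_kb = true

  # Placeholder ARN -- substitute the ARN of your existing S3 data source.
  kb_s3_data_source = "arn:aws:s3:::example-kb-documents"

  foundation_model = "anthropic.claude-v2"
  instruction      = "You are an automotive assistant who can provide detailed information about cars to a customer."
}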
Agents enable generative AI applications to execute multistep tasks across company systems and data sources.
The following example creates an Agent with a simple instruction and without any action groups or knowledge bases.
module "bedrock" {
source = "aws-ia/bedrock/aws"
version = "0.0.5"
foundation_model = "anthropic.claude-v2"
instruction = "You are an automotive assisant who can provide detailed information about cars to a customer."
}
To create an Agent with a default Knowledge Base, you simply set create_kb and create_default_kb to true:
module "bedrock" {
source = "aws-ia/bedrock/aws"
version = "0.0.5"
create_kb = true
create_default_kb = true
foundation_model = "anthropic.claude-v2"
instruction = "You are an automotive assisant who can provide detailed information about cars to a customer."
}
An action group defines functions your agent can call. The functions are Lambda functions. The action group uses an OpenAPI schema to tell the agent what your functions do and how to call them. You can configure an action group by passing in the appropriate input variables.
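As an illustrative sketch only, an action group configuration might look like the following. The input names used here (create_ag, action_group_name, lambda_action_group_executor, api_schema_payload) and the referenced Lambda are assumptions; confirm them against the module's published variables before use:
module "bedrock" {
  source  = "aws-ia/bedrock/aws"
  version = "0.0.5"

  foundation_model = "anthropic.claude-v2"
  instruction      = "You are an automotive assistant who can provide detailed information about cars to a customer."

  # Assumed input names -- verify against the module's variables.
  create_ag                    = true
  action_group_name            = "vehicle-lookup"
  lambda_action_group_executor = aws_lambda_function.vehicle_lookup.arn # hypothetical Lambda defined elsewhere
  api_schema_payload           = file("${path.module}/schemas/vehicle_lookup_openapi.yaml")
}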
The Agent constructs take an optional parameter shouldPrepareAgent that indicates whether the Agent should be prepared after any updates to an agent, Knowledge Base association, or action group. Preparing the agent may increase the time needed to create and update those resources. By default, this value is true.
Bedrock Agents lets you customize the prompts and LLM configuration used at each of its steps. You can disable individual steps or supply a new prompt template, and prompt templates can be inserted from plain text files.
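Below is a hedged sketch of loading a prompt template from a plain text file using Terraform's file() function; the prompt_override and base_prompt_template input names are hypothetical, so check the module's variables for the real ones:
module "bedrock" {
  source  = "aws-ia/bedrock/aws"
  version = "0.0.5"

  foundation_model = "anthropic.claude-v2"
  instruction      = "You are an automotive assistant who can provide detailed information about cars to a customer."

  # Hypothetical input names -- check the module's variables for the real ones.
  prompt_override      = true
  base_prompt_template = file("${path.module}/prompts/orchestration.txt")
}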
Amazon Bedrock's Guardrails feature enables you to implement robust governance and control mechanisms for your generative AI applications, ensuring alignment with your specific use cases and responsible AI policies. Guardrails empowers you to create multiple tailored policy configurations, each designed to address the unique requirements and constraints of different use cases. These policy configurations can then be seamlessly applied across multiple foundation models (FMs) and Agents, ensuring a consistent user experience and standardizing safety, security, and privacy controls throughout your generative AI ecosystem.
With Guardrails, you can define and enforce granular, customizable policies to precisely govern the behavior of your generative AI applications. You can configure the following policies in a guardrail to avoid undesirable and harmful content and remove sensitive information for privacy protection.
- Content filters – Adjust filter strengths to block input prompts or model responses containing harmful content.
- Denied topics – Define a set of topics that are undesirable in the context of your application. These topics will be blocked if detected in user queries or model responses.
- Word filters – Configure filters to block undesirable words, phrases, and profanity. Such words can include offensive terms, competitor names, etc.
- Sensitive information filters – Block or mask sensitive information such as personally identifiable information (PII) or custom regex in user inputs and model responses.
You can create a Guardrail by setting create_guardrail to true and passing in the appropriate input variables:
module "bedrock" {
source = "aws-ia/bedrock/aws"
version = "0.0.5"
create_kb = false
create_default_kb = false
create_guardrail = true
blocked_input = "I can provide general info about services, but can't fully address your request here. For personalized help or detailed questions, please contact our customer service team directly. For security reasons, avoid sharing sensitive information through this channel. If you have a general product question, feel free to ask without including personal details."
blocked_output = "I can provide general info about services, but can't fully address your request here. For personalized help or detailed questions, please contact our customer service team directly. For security reasons, avoid sharing sensitive information through this channel. If you have a general product question, feel free to ask without including personal details."
filters_config = [
{
input_strength = "MEDIUM"
output_strength = "MEDIUM"
type = "HATE"
},
{
input_strength = "HIGH"
output_strength = "HIGH"
type = "VIOLENCE"
}
]
pii_entities_config = [
{
action = "BLOCK"
type = "NAME"
},
{
action = "BLOCK"
type = "DRIVER_ID"
},
{
action = "ANONYMIZE"
type = "USERNAME"
},
]
regexes_config = [{
action = "BLOCK"
description = "example regex"
name = "regex_example"
pattern = "^\\d{3}-\\d{2}-\\d{4}$"
}]
managed_word_lists_config = [{
type = "PROFANITY"
}]
words_config = [{
text = "HATE"
}]
topics_config = [{
name = "investment_topic"
examples = ["Where should I invest my money ?"]
type = "DENY"
definition = "Investment advice refers to inquiries, guidance, or recommendations regarding the management or allocation of funds or assets with the goal of generating returns ."
}]
foundation_model = "anthropic.claude-v2"
instruction = "You are an automotive assisant who can provide detailed information about cars to a customer."
}
Amazon Bedrock provides the ability to create and save prompts using Prompt management so that you can save time by applying the same prompt to different workflows. You can include variables in the prompt so that you can adjust the prompt for different use cases.
Prompt variants in the context of Amazon Bedrock refer to alternative configurations of a prompt, including its message or the model and inference configurations used. Prompt variants allow you to create different versions of a prompt, test them, and save the variant that works best for your use case. You can add prompt variants to a prompt by passing in the values for the variants_list variable:
variants_list = [
  {
    name          = "variant-example"
    template_type = "TEXT"
    model_id      = "amazon.titan-text-express-v1"

    inference_configuration = {
      text = {
        temperature    = 1
        top_p          = 0.99
        max_tokens     = 300
        stop_sequences = ["User:"]
        top_k          = 250
      }
    }

    template_configuration = {
      text = {
        # Declare every variable referenced in the template text below.
        input_variables = [
          { name = "genre" },
          { name = "number" }
        ]
        text = "Make me a {{genre}} playlist consisting of the following number of songs: {{number}}."
      }
    }
  }
]
A prompt version is a snapshot of a prompt at a specific point in time, created when you are satisfied with a set of configurations. Versions allow you to deploy your prompt, easily switch between different configurations, and update your application with the most appropriate version for your use case.
You can create a Prompt version by setting create_prompt_version to true and adding an optional prompt_version_description and optional prompt_version_tags.
Creating a prompt with a prompt version would look like:
module "bedrock" {
source = "../.." # local example
create_kb = false
create_default_kb = false
create_s3_data_source = false
create_agent = false
# Prompt Management
prompt_name = "prompt"
default_variant = "variant-example"
create_prompt = true
create_prompt_version = true
prompt_version_description = "Example prompt version"
variants_list = [
{
name = "variant-example"
template_type = "TEXT"
model_id = "amazon.titan-text-express-v1"
inference_configuration = {
text = {
temperature = 1
top_p = 0.9900000095367432
max_tokens = 300
stop_sequences = ["User:"]
top_k = 250
}
}
template_configuration = {
text = {
input_variables = [
{
name = "topic"
}
]
text = "Make me a {{genre}} playlist consisting of the following number of songs: {{number}}."
}
}
}
]
}