Is your feature request related to a problem?
LLMs may generate biased, impolite, or hateful outputs. We need to build a guardrail to block such results.
What solution would you like?
Add a basic guardrail based on stop words and regex. If the model input or output contains a stop word or matches a regex, an exception will be thrown. This will be released in 2.13. A sketch of the check is shown below.
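As a rough illustration of the idea (not the actual ml-commons implementation; the class and method names here are hypothetical), the check could look like this: scan the text for any configured stop word or regex match and throw if one is found.

```java
import java.util.List;
import java.util.regex.Pattern;

/**
 * Illustrative sketch only: a basic guardrail that rejects model
 * input/output containing configured stop words or matching
 * configured regex patterns.
 */
public class BasicGuardrail {
    private final List<String> stopWords;
    private final List<Pattern> regexPatterns;

    public BasicGuardrail(List<String> stopWords, List<String> regexes) {
        this.stopWords = stopWords;
        this.regexPatterns = regexes.stream().map(Pattern::compile).toList();
    }

    /** Throws if the text violates the guardrail, otherwise returns normally. */
    public void validate(String text) {
        String lower = text.toLowerCase();
        for (String word : stopWords) {
            // Case-insensitive stop-word check
            if (lower.contains(word.toLowerCase())) {
                throw new IllegalArgumentException("guardrail triggered by stop word: " + word);
            }
        }
        for (Pattern pattern : regexPatterns) {
            // Regex check against the raw text
            if (pattern.matcher(text).find()) {
                throw new IllegalArgumentException("guardrail triggered by regex: " + pattern.pattern());
            }
        }
    }
}
```

Both the user input sent to the remote model and the model's response would pass through such a check before being returned to the caller.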
What alternatives have you considered?
No
Do you have any additional context?
We plan to add guardrails based on external guardrail services and semantic checks after 2.13.
ylwu-amzn changed the title from "[FEATURE] Guardrails for remote model input and output" to "[FEATURE] Basic guardrails for remote model input and output" on Mar 25, 2024.