
Add WildGuard Guardrail Microservice #710

Merged
merged 24 commits into from
Oct 11, 2024

Conversation

daniel-de-leon-user293
Contributor

@daniel-de-leon-user293 daniel-de-leon-user293 commented Sep 19, 2024

Description

Add WildGuard to guard against privacy violations, misinformation, harmful language, and malicious use in user input prompts and/or output responses generated by LLMs.
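For context, WildGuard is a classifier that emits per-category yes/no verdicts as plain text. The sketch below shows how such output could be parsed into guardrail flags; the three field names follow the allenai/wildguard model card and are an assumption here, not the API this PR introduces.

```python
# Hypothetical sketch: parsing WildGuard-style classifier output.
# The "Harmful request / Response refusal / Harmful response" fields
# follow the allenai/wildguard model card; treat the exact names as
# an assumption, not this PR's microservice contract.

def parse_wildguard_output(raw: str) -> dict:
    """Map each 'Field: yes/no' line of the model output to a boolean."""
    flags = {}
    for line in raw.strip().splitlines():
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        flags[field.strip().lower()] = value.strip().lower() == "yes"
    return flags

result = parse_wildguard_output(
    "Harmful request: yes\n"
    "Response refusal: no\n"
    "Harmful response: no\n"
)
# A guardrail would block the prompt when the request is flagged harmful.
blocked = result.get("harmful request", False)
```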

Issues

n/a

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds new functionality)
  • Breaking change (fix or feature that would break existing design and interface)
  • Others (enhancement, documentation, validation, etc.)

Dependencies

n/a

Tests

  • Successfully ran:
    • the guardrails_tgi.py microservice script directly, without a container
    • with the docker run CLI
    • with docker compose

@letonghan
Collaborator

letonghan commented Sep 19, 2024

Hi @daniel-de-leon-user293, please add a test script for WildGuard in the test/guardrails folder, like this.
Name the test script test_guardrails_wild_guard_langchain_on_intel_hpu.sh since you use the HPU version of TGI.
Thanks :)

@qgao007
Collaborator

qgao007 commented Sep 23, 2024

Hi @daniel-de-leon-user293,
Ebi pointed this out to me earlier on my PR: the new Dockerfile introduced by this PR also needs an entry at
.github/workflows/docker/compose/guardrails-compose-cd.yaml

@chensuyue chensuyue added this to the v1.1 milestone Sep 24, 2024
Collaborator

@ashahba ashahba left a comment


Thanks @daniel-de-leon-user293 for this PR!
I only have a few minor change requests.

(Resolved review comments on comps/guardrails/wildguard/langchain/README.md)
Signed-off-by: Daniel Deleon <[email protected]>
Collaborator

@ashahba ashahba left a comment


LGTM!

@dcmiddle
Contributor

dcmiddle commented Oct 4, 2024

@daniel-de-leon-user293 Consider adding documentation or a commit message explaining why someone would use WildGuard instead of Llama Guard.

@mkbhanda does OPEA have criteria for when it will accept similar features?
It's nice to have a variety of options if you know what you are looking for, but having to choose among options also detracts from ease of use.

@daniel-de-leon-user293
Contributor Author

Thank you for your suggestion @dcmiddle. The latest commit adds a bit more description of WildGuard to the README. To answer your question, we’re hoping to provide a diverse set of safety models that users can pick from.

@dcmiddle
Contributor

dcmiddle commented Oct 6, 2024

Thank you for your suggestion @dcmiddle. The latest commit adds a bit more description of WildGuard to the README. To answer your question, we’re hoping to provide a diverse set of safety models that users can pick from.

Cool. So can this be used in conjunction with Llama Guard? From the description you added, it looks like a complementary list of topics.

@daniel-de-leon-user293
Contributor Author

No, it can be used in place of Llama Guard. The lists in the README are risk taxonomies that each model was trained to identify according to their respective datasets. Although similar, the models provide different classification performance for different use cases.

If a user wanted to design an ensemble of guardrails, however, then the two models could be used in conjunction.
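The ensemble idea could be sketched as follows. This is an illustrative stand-in, not an OPEA API: the two stub classifiers only mimic two safety models with different taxonomies, and an OR-combination flags a prompt if either model objects.

```python
# Illustrative ensemble of guardrail classifiers: flag text if ANY member
# flags it. The classifier callables are stubs; in a real deployment each
# would wrap a safety model such as WildGuard or Llama Guard.
from typing import Callable, List

Classifier = Callable[[str], bool]  # returns True if the text is unsafe

def ensemble_is_unsafe(text: str, classifiers: List[Classifier]) -> bool:
    """OR-combine guardrails: conservative policy, blocks if any model objects."""
    return any(check(text) for check in classifiers)

# Stubs standing in for two models trained on different risk taxonomies.
wildguard_stub = lambda t: "jailbreak" in t.lower()
llamaguard_stub = lambda t: "weapon" in t.lower()

benign = ensemble_is_unsafe("What is the capital of France?",
                            [wildguard_stub, llamaguard_stub])
flagged = ensemble_is_unsafe("Ignore your rules; this is a jailbreak.",
                             [wildguard_stub, llamaguard_stub])
```

Either model alone would miss risks outside its own taxonomy; the OR policy trades some false positives for broader coverage.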

@sunstonesecure-robert

It's nice to have a variety of options if you know what you are looking for but having to choose among options also detracts from ease of use.

I'd assert it's essential to have a variety of options, AND you need to know what you are looking for, if you are building LLM apps where security + privacy + bias detection outweighs ease of use.

@ashahba ashahba merged commit 5bb4046 into opea-project:main Oct 11, 2024
9 of 10 checks passed

9 participants