This page covers how to use the Prediction Guard ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.
- Install the Prediction Guard LangChain partner package:

```bash
pip install langchain-predictionguard
```

- Get a Prediction Guard API key and set it as the `PREDICTIONGUARD_API_KEY` environment variable.
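Before constructing any of the wrappers below, you can confirm that the key is available in the environment. A minimal, standard-library-only sketch (the `has_predictionguard_key` helper and the placeholder value are illustrative, not part of the package):

```python
import os

def has_predictionguard_key() -> bool:
    """Return True if the PREDICTIONGUARD_API_KEY environment variable is set."""
    return bool(os.environ.get("PREDICTIONGUARD_API_KEY"))

# Set the variable in-process for illustration; in practice, export it in your shell.
os.environ["PREDICTIONGUARD_API_KEY"] = "<your-api-key>"
print(has_predictionguard_key())  # True
```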
| API | Description | Endpoint Docs | Import | Example Usage |
|---|---|---|---|---|
| Chat | Build chat bots | Chat | `from langchain_predictionguard import ChatPredictionGuard` | ChatPredictionGuard.ipynb |
| Completions | Generate text | Completions | `from langchain_predictionguard import PredictionGuard` | PredictionGuard.ipynb |
| Text Embedding | Embed strings to vectors | Embeddings | `from langchain_predictionguard import PredictionGuardEmbeddings` | PredictionGuardEmbeddings.ipynb |
Chat model usage example:

```python
from langchain_predictionguard import ChatPredictionGuard

# If predictionguard_api_key is not passed, the `PREDICTIONGUARD_API_KEY`
# environment variable is used by default.
chat = ChatPredictionGuard(model="Hermes-3-Llama-3.1-8B")
chat.invoke("Tell me a joke")
```
Embedding model usage example:

```python
from langchain_predictionguard import PredictionGuardEmbeddings

# If predictionguard_api_key is not passed, the `PREDICTIONGUARD_API_KEY`
# environment variable is used by default.
embeddings = PredictionGuardEmbeddings(model="bridgetower-large-itm-mlm-itc")

text = "This is an embedding example."
output = embeddings.embed_query(text)
```
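`embed_query` returns a plain list of floats, so embeddings can be compared with standard vector math. A minimal cosine-similarity sketch using only the standard library (the toy vectors below stand in for real model output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embed_query outputs.
v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.6, 1.0]
print(round(cosine_similarity(v1, v2), 4))  # 1.0, since v2 is a scalar multiple of v1
```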
LLM (completions) usage example:

```python
from langchain_predictionguard import PredictionGuard

# If predictionguard_api_key is not passed, the `PREDICTIONGUARD_API_KEY`
# environment variable is used by default.
llm = PredictionGuard(model="Hermes-2-Pro-Llama-3-8B")
llm.invoke("Tell me a joke about bears")
```