This project stores data as OpenAI embedding vectors obtained through API calls. It requires the OPENAI_API_KEY environment variable to be set. It implements Retrieval-Augmented Generation (RAG) in combination with LangChain to retrieve embedding vectors from a vector database.
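The retrieval step described above can be sketched as follows. This is a minimal, hedged example, not the project's actual code: it assumes the `langchain-openai` and `langchain-chroma` packages, and the ChromaDB persist directory path is a guess based on the folder structure below.

```python
import os

# Hypothetical on-disk location of the ChromaDB files under resources/;
# the actual path used by this repo may differ.
PERSIST_DIR = "resources/chromadb"

def build_retriever(persist_dir: str = PERSIST_DIR):
    """Open the persisted Chroma collection and return a LangChain retriever.

    Requires OPENAI_API_KEY in the environment, since documents and queries
    are embedded through OpenAI's embeddings API.
    """
    # Imported lazily so this module loads even without the packages installed.
    from langchain_openai import OpenAIEmbeddings
    from langchain_chroma import Chroma

    if "OPENAI_API_KEY" not in os.environ:
        raise RuntimeError("OPENAI_API_KEY must be set in the environment")

    embeddings = OpenAIEmbeddings()                    # embeds via the OpenAI API
    store = Chroma(persist_directory=persist_dir,
                   embedding_function=embeddings)      # on-disk vector database
    return store.as_retriever(search_kwargs={"k": 4})  # top-4 nearest chunks

# Usage (needs a populated ChromaDB directory and a valid API key):
#   docs = build_retriever().invoke("What does this project do?")
```

The imports are deferred into the function so the module can be inspected without the LangChain packages present; the `k=4` retrieval depth is an illustrative default, not a project setting.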
Folder structure
notebooks:
- Jupyter notebooks
- Experiments with different generative AI models (e.g. FLAN, QA models, BERT)
resources:
- Extra resources (e.g. templates, sample questions)
- On-disk ChromaDB files
configs:
- Configuration files
main.py:
- Entry point of the application
Technology Stack
- FastAPI: microservice framework
- LangChain: builds the pipeline across the generative AI model components
- ChromaDB: vector database storage
- OpenAI: embedding API calls
- RAG: retrieval-augmented generation
- FLAN-T5: fine-tuned model
- BERT: fine-tuned model (RoBERTa flavours)