
Interface to self-hosted large language models and vector databases to provide improved Integreat Chat functionality


digitalfabrik/integreat-chat


About

RAG/LLM-supported online migration counseling service and improved Integreat search engine. It integrates as a chat service into the Integreat App and presents requests to counselors in Zammad. The solution aims to be privacy-friendly by not using any third-party LLM services.

Start Project

  1. Create a virtual environment and activate it:
    python3 -m venv .venv
    source .venv/bin/activate
    
  2. Install all dependencies:
    pip install .
    
  3. Run the server:
    cd integreat_chat
    python3 manage.py migrate
    python3 manage.py runserver
    

Configuration

Back End

  • Deploy as a normal Django application. No separate database server is needed.
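A minimal production-settings sketch for such a deployment, assuming the standard Django settings layout (the host name is a placeholder, not the project's actual configuration):

```python
# settings.py fragment — hypothetical values; adapt to your deployment.
DEBUG = False
ALLOWED_HOSTS = ["integreat-chat.example.com"]  # host serving the app
# No separate database server: Django's default SQLite backend suffices
# for the migrate/runserver steps above.
```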

Zammad Integration

To integrate Zammad, set the following configuration:

  • Webhook to https://integreat-cms.example.com/api/v3/webhook/zammad/?token=$REGION_TOKEN
  • Trigger for webhook:
    • Conditions: Action is updated, Subject contains not "automatically generated message"
    • Execute: webhook configured above
  • Auto-response for new tickets in each language (example: EN)
    • Conditions: State is new, Action is updated, Subject contains not "automatically generated message", Title contains [EN]
    • Execute: Email, visibility public, Recipient Customer, Subject "automatically generated message", add a message fitting the needs
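For illustration, the webhook target URL from the first bullet can be assembled per region like this (base URL and token are placeholders; only the path and `token` query parameter come from the configuration above):

```python
# Sketch: build the Integreat CMS webhook endpoint for one region.
from urllib.parse import urlencode

def build_webhook_url(base: str, region_token: str) -> str:
    """Return the CMS webhook URL that Zammad should POST to."""
    query = urlencode({"token": region_token})
    return f"{base}/api/v3/webhook/zammad/?{query}"

url = build_webhook_url("https://integreat-cms.example.com", "REGION_TOKEN")
print(url)
# → https://integreat-cms.example.com/api/v3/webhook/zammad/?token=REGION_TOKEN
```

Each region gets its own `$REGION_TOKEN`, so one webhook plus trigger pair is configured per region.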
