Simple Algorithm to Validate Medical LLM Outputs Using Knowledge Graphs

Code repository for our paper, "Medical Large Language Models are Vulnerable to Data Poisoning Attacks" (Nature Medicine, 2024).

  1. Install miniconda (https://docs.anaconda.com/free/miniconda/)
  2. Create a conda environment: conda create -n defense-algorithm python=3.11
  3. Activate the environment: conda activate defense-algorithm
  4. Change to this directory: cd <path/to/this/dir>
  5. Install requirements using pip: pip install -r requirements.txt
  6. Run the script using the toy dataset: python screen_outputs.py

Note: The embedding models used in this code are distributed under their own licensing agreements.
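
For orientation, the sketch below illustrates one way an embedding-based screen of LLM outputs against vetted knowledge-graph terms could work. It is a minimal, hypothetical example: the sentence-transformers model, the `trusted_terms` vocabulary, the `screen_output` function, and the similarity threshold are all assumptions for illustration and do not reflect the actual interface of `screen_outputs.py`.

```python
# Hypothetical sketch: flag phrases in a model output that do not match
# any vetted term from a biomedical knowledge graph. The model name,
# term list, threshold, and function names are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Vetted vocabulary, e.g. concept names drawn from a knowledge graph.
trusted_terms = ["metformin", "type 2 diabetes mellitus", "hypoglycemia"]
trusted_emb = model.encode(trusted_terms, convert_to_tensor=True)

def screen_output(extracted_phrases, threshold=0.8):
    """Return (phrase, best_similarity, flagged) for each extracted phrase."""
    phrase_emb = model.encode(extracted_phrases, convert_to_tensor=True)
    sims = util.cos_sim(phrase_emb, trusted_emb)  # pairwise cosine similarities
    results = []
    for phrase, row in zip(extracted_phrases, sims):
        best = float(row.max())
        results.append((phrase, best, best < threshold))  # True = potentially unsupported
    return results

if __name__ == "__main__":
    print(screen_output(["metformin", "miracle glucose cure"]))
```

In this sketch, phrases whose best match against the vetted vocabulary falls below the threshold are flagged for review rather than silently accepted.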
