Awesome AI Security

A curated list of awesome AI security-related frameworks, attacks, tools, and papers. Inspired by awesome-machine-learning.

If you want to contribute, create a PR or contact me @ottosulin.

Related awesome lists

Frameworks and standards

Taxonomies and terminology

Offensive tools and frameworks

Generic

  • Malware Env for OpenAI Gym - an OpenAI Gym environment for writing agents that learn to manipulate PE files (e.g., malware) to achieve an objective (e.g., bypassing AV), guided by rewards for specific manipulation actions
  • Deep-pwning - a lightweight framework for experimenting with machine learning models with the goal of evaluating their robustness against a motivated adversary
  • Counterfit - generic automation layer for assessing the security of machine learning systems
  • DeepFool - A simple and accurate method to fool deep neural networks (see the sketch after this list)
  • garak - security probing tool for LLMs
  • Snaike-MLFlow - MLflow red team toolsuite
  • HackGPT - A tool using ChatGPT for hacking
  • HackingBuddyGPT - An automatic pentester (+ corresponding [benchmark dataset](https://github.com/ipa-lab/hacking-benchmark))
  • Charcuterie - code execution techniques for ML or ML adjacent libraries
  • OffsecML Playbook - A collection of offensive and adversarial TTPs with proofs of concept
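
To make the category concrete, here is a minimal sketch of the DeepFool iteration (the nearest-boundary attack listed above), assuming a PyTorch classifier `model` that maps a single `(1, ...)`-shaped input to raw logits; the hyperparameter names follow the paper, but this is illustrative code, not the reference implementation.

```python
import torch

def deepfool(model, x, num_classes, max_iter=50, overshoot=0.02):
    """Minimal DeepFool iteration: linearize the classifier around the current
    point and step toward the nearest decision boundary until the label flips."""
    orig_label = model(x).argmax(dim=1).item()
    r_total = torch.zeros_like(x)                      # accumulated perturbation
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(max_iter):
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != orig_label:  # misclassified: done
            break
        grad_orig = torch.autograd.grad(logits[0, orig_label], x_adv,
                                        retain_graph=True)[0]
        best_dist, best_w = float("inf"), None
        for k in range(num_classes):                   # find the closest boundary
            if k == orig_label:
                continue
            grad_k = torch.autograd.grad(logits[0, k], x_adv,
                                         retain_graph=True)[0]
            w = grad_k - grad_orig                     # linearized boundary normal
            f = (logits[0, k] - logits[0, orig_label]).abs().item()
            dist = f / (w.norm().item() + 1e-8)        # distance to boundary k
            if dist < best_dist:
                best_dist, best_w = dist, w
        r_total = r_total + best_dist * best_w / (best_w.norm() + 1e-8)
        x_adv = (x + (1 + overshoot) * r_total).detach().requires_grad_(True)
    return (x + (1 + overshoot) * r_total).detach()
```

The small `overshoot` pushes the point slightly past the linearized boundary, since the true boundary is curved and a step exactly to the linear estimate often falls short.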

Adversarial

Poisoning

  • BadDiffusion - Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 (a generic poisoning sketch follows below)
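
The recipe behind backdoor poisoning is easy to state. Below is a hedged, minimal sketch of classic BadNets-style data poisoning (simpler than the diffusion-model setting in the paper above), assuming `images` is an `(N, H, W)` NumPy array; the trigger size, placement, and rate are illustrative choices.

```python
import numpy as np

def poison(images, labels, target_label, rate=0.1, rng=None):
    """Stamp a small trigger patch on a random fraction of training images and
    relabel them, so the trained model associates the trigger with the target."""
    rng = rng or np.random.default_rng()
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = images.max()   # 3x3 trigger in the bottom-right corner
    labels[idx] = target_label             # misdirect those samples to the target
    return images, labels
```

At test time, a model trained on the poisoned set behaves normally on clean inputs but predicts `target_label` whenever the trigger patch is present.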

Privacy

  • PrivacyRaven - privacy testing library for deep learning systems (a model-extraction sketch follows below)
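
One of the attack classes PrivacyRaven automates is model extraction: query a black-box model for labels, then train a local substitute on the responses. A minimal sketch of that idea with scikit-learn; the victim, query distribution, and substitute architecture here are illustrative placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def extract(victim_predict, n_queries=2000, n_features=20, rng=None):
    """Train a substitute model purely from the victim's label responses."""
    rng = rng or np.random.default_rng(0)
    X = rng.normal(size=(n_queries, n_features))  # attacker-chosen query set
    y = victim_predict(X)                         # black-box label oracle
    return DecisionTreeClassifier().fit(X, y)

# Toy demo: fit a "victim", steal it, and measure label agreement.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
victim = MLPClassifier(max_iter=1000).fit(X_train, y_train)

substitute = extract(victim.predict)
X_test = rng.normal(size=(1000, 20))
agreement = (substitute.predict(X_test) == victim.predict(X_test)).mean()
print(f"substitute matches the victim on {agreement:.0%} of fresh queries")
```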

Defensive tools and frameworks

Safety and prevention

  • Guardrail.ai - Guardrails is a Python package that lets a user add structure, type, and quality guarantees to the outputs of large language models (LLMs); the sketch after this list illustrates the general idea
  • CodeGate - An open-source, privacy-focused project that acts as a layer of security within a developer's code-generation AI workflow
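
To illustrate the general idea behind output guardrails (a hand-rolled sketch, not the Guardrails or CodeGate API): parse the model's raw text into a typed structure and reject anything that violates the schema before it reaches downstream code. The schema and field names below are hypothetical.

```python
import json

# Hypothetical schema for a vulnerability-triage assistant's output.
EXPECTED_TYPES = {"severity": str, "cve_ids": list, "summary": str}
ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def validate_llm_output(raw_text: str) -> dict:
    """Enforce structure and types on LLM output; raise on any violation."""
    data = json.loads(raw_text)  # non-JSON output fails here
    for field, ftype in EXPECTED_TYPES.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"field {field!r} missing or not {ftype.__name__}")
    if data["severity"] not in ALLOWED_SEVERITIES:
        raise ValueError(f"severity {data['severity']!r} not an allowed value")
    return data

# A real guardrail layer would re-prompt or fall back instead of just raising.
print(validate_llm_output(
    '{"severity": "high", "cve_ids": ["CVE-2024-0001"], "summary": "RCE"}'))
```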

Detection

  • ProtectAI's model scanner - Security scanner that detects serialized ML models performing suspicious actions when loaded (see the sketch after this list)
  • rebuff - Prompt Injection Detector
  • langkit - LangKit is an open-source text metrics toolkit for monitoring language models. The toolkit provides various security-related metrics that can be used to detect attacks
  • StringSifter - A machine learning tool that ranks strings based on their relevance for malware analysis
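
The core trick behind model scanners like ProtectAI's is that a malicious pickle-serialized model must import something dangerous to execute code at load time. Here is a minimal, hedged sketch of that opcode-level check using only the standard library; the module denylist and heuristics are illustrative, not the scanner's actual rules.

```python
import pickle
import pickletools

SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins", "socket"}

def scan_pickle(data: bytes):
    """Return (position, dotted-name) pairs for risky imports in a pickle stream."""
    findings, strings = [], []            # strings: recently pushed str constants
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":     # older protocols: "module name" in arg
            module, name = arg.split(" ", 1)
            if module.split(".")[0] in SUSPICIOUS:
                findings.append((pos, f"{module}.{name}"))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]   # pulled from the stack
            if module.split(".")[0] in SUSPICIOUS:
                findings.append((pos, f"{module}.{name}"))
    return findings

# A pickle holding a reference to eval is flagged without ever being loaded.
print(scan_pickle(pickle.dumps(eval)))   # -> [(pos, 'builtins.eval')]
```

Crucially, the scan uses `pickletools.genops` to walk the opcodes statically, so the payload is never deserialized or executed.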

Privacy and confidentiality

  • Python Differential Privacy Library
  • Diffprivlib - The IBM Differential Privacy Library (the sketch after this list shows the Laplace mechanism such libraries build on)
  • PLOT4ai - Privacy Library Of Threats 4 Artificial Intelligence, a threat modeling library to help you build responsible AI
  • TenSEAL - A library for doing homomorphic encryption operations on tensors
  • SyMPC - A Secure Multiparty Computation companion library for Syft
  • PyVertical - Privacy Preserving Vertical Federated Learning
  • Cloaked AI - Open source property-preserving encryption for vector embeddings
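
As a reminder of what the differential-privacy libraries above provide under the hood, here is the textbook Laplace mechanism in plain Python (not any particular library's API): adding Laplace(0, sensitivity/ε) noise to a query answer yields ε-differential privacy.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    giving epsilon-differential privacy for a query with that L1 sensitivity."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                     # uniform on (-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Example: privately release a counting query (sensitivity 1) at epsilon = 0.5.
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5))
```

Smaller ε means a tighter privacy guarantee but noisier answers; production libraries add budget accounting and floating-point hardening on top of this primitive.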

Resources for learning

Uncategorized useful resources

Research Papers

Adversarial examples and attacks

Model extraction

Evasion

Poisoning

Privacy

Injection

Other research papers
