Welcome to the Generative AI Red Teaming Theory repository! This repository contains information, presentations, and resources related to the integration of generative AI in red teaming practices within cybersecurity.
Generative AI Red Teaming combines traditional red teaming methodologies with advanced generative AI techniques to enhance the simulation of cyberattacks, identify vulnerabilities, and improve security measures.
You can view and download the presentation slides here: Generative AI Red Teaming Presentation
- Generative Red Teaming: An approach that uses generative AI to simulate sophisticated, adaptive attack campaigns.
- Adversarial Machine Learning: Techniques for probing the robustness of AI models against adversarial inputs (e.g., evasion and poisoning attacks).
- Attack Simulations: Use of AI to generate automated, realistic attack scenarios.
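The adversarial machine learning concept above can be illustrated with a minimal fast gradient sign method (FGSM) sketch. Everything here (the toy logistic-regression model, its weights, and the perturbation budget `eps`) is illustrative and assumed for the example; it is not tied to any specific toolkit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """Fast Gradient Sign Method against a logistic-regression classifier.

    Moves x by eps in the direction that increases the model's loss,
    i.e. along the sign of the gradient of the cross-entropy w.r.t. x.
    """
    p = sigmoid(np.dot(w, x) + b)     # model's predicted probability
    grad_x = (p - y_true) * w         # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)  # adversarial example

# Toy model and a correctly classified benign sample (assumed values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # score = 1.5 -> classified as 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.6)

print(sigmoid(np.dot(w, x) + b) > 0.5)      # original sample: True
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # perturbed sample: False
```

A red team exercise against a production model follows the same pattern at larger scale, typically via a library such as the Adversarial Robustness Toolbox listed below rather than hand-rolled gradients.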
Adversarial Robustness Toolbox (ART)
Carlini, N., & Wagner, D. (2017). Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy (S&P).
Cybersecurity Frameworks and Methodologies
NIST (National Institute of Standards and Technology). (2018). Framework for Improving Critical Infrastructure Cybersecurity. NIST Framework.
Contributions are welcome! If you have suggestions for improvements or additional resources, please feel free to submit a pull request or open an issue.