
Fuzzing with AFL workshop

Materials for the "Fuzzing with AFL" workshop by Michael Macnair (@michael_macnair).

The first public version of this workshop was presented at SteelCon 2017; it was later revised for BSides London and Bristol 2019.

Prerequisites

  • 3-4 hours (more to complete all the challenges)
  • Linux machine
  • Basic C and command line experience - ability to modify and compile C programs.
  • Docker, or the dependencies described in quickstart.

Contents

  • quickstart - Do this first! A tiny sample program to get started with fuzzing, including instructions on how to set up your machine.
  • harness - the basics of creating a test harness. Do this if you have any doubts about the "plumbing" between afl-fuzz and the target code (see the sketch after this list).
  • challenges - a set of known-vulnerable programs with fuzzing hints.
  • docker - Instructions and Dockerfile for preparing a suitable environment, and hosting it on AWS if you wish. A prebuilt image is on Docker Hub at mykter/afl-training.
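To make that "plumbing" concrete, here is a minimal harness sketch of the kind the quickstart and harness exercises build up to. It is illustrative rather than taken from the workshop: `process_input()` is a hypothetical stand-in for whatever code you want to test, and the build/run commands assume AFL's afl-clang-fast wrapper is installed.

```c
/*
 * Minimal harness sketch (illustrative): process_input() is a
 * hypothetical stand-in for the code under test.
 *
 * Build and run, assuming afl-clang-fast is on your PATH:
 *   afl-clang-fast -g -o harness harness.c
 *   afl-fuzz -i inputs/ -o findings/ ./harness
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Toy stand-in for the code under test: aborts on a magic prefix,
 * giving the fuzzer a "bug" to find. */
static void process_input(const uint8_t *data, size_t len)
{
    if (len >= 4 && memcmp(data, "FUZZ", 4) == 0)
        abort();
}

int main(void)
{
    static uint8_t buf[4096];

    /* By default afl-fuzz delivers each mutated test case on stdin. */
    ssize_t len = read(STDIN_FILENO, buf, sizeof(buf));
    if (len > 0)
        process_input(buf, (size_t)len);

    return 0;
}
```

Reading from stdin keeps the harness compatible with afl-fuzz's default input mode; for targets that insist on a file path, you would instead put @@ on the afl-fuzz command line and AFL substitutes the current test case's filename.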

See the other READMEs for more information.

Challenges

Challenges, roughly in recommended order, with any specific aspects they cover:

  • libxml2 - an ideal target, using ASAN and persistent mode.
  • heartbleed - infamous bug, using ASAN.
  • sendmail/1301 - parallel fuzzing.
  • date - fuzzing environment variable input.
  • ntpq - fuzzing a network client; coverage analysis and increasing coverage.
  • cyber-grand-challenge - an easy vuln, and an example of a vuln that is hard to find with AFL.
  • sendmail/1305 - persistent mode difficulties (see the sketch after this list).
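Since persistent mode appears twice above, here is a minimal sketch of the pattern, assuming afl-clang-fast (which defines the `__AFL_LOOP()` macro) and reusing the hypothetical `process_input()` from the earlier sketch. The sendmail/1305 exercise explores what goes wrong when the code under test carries state from one iteration to the next.

```c
/*
 * Persistent-mode sketch (illustrative). __AFL_LOOP() is defined when
 * compiling with afl-clang-fast; the fallback below lets the file also
 * build as a plain single-shot harness.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#ifndef __AFL_LOOP
static int fallback_runs = 1;
#define __AFL_LOOP(n) (fallback_runs-- > 0) /* one pass outside AFL */
#endif

/* Hypothetical stand-in for the code under test, as before. */
static void process_input(const uint8_t *data, size_t len)
{
    if (len >= 4 && memcmp(data, "FUZZ", 4) == 0)
        abort();
}

int main(void)
{
    static uint8_t buf[4096];

    /* Each pass round the loop is one test case; the process is only
     * re-spawned every 1000 inputs, avoiding one fork per input. */
    while (__AFL_LOOP(1000)) {
        ssize_t len = read(STDIN_FILENO, buf, sizeof(buf));
        if (len > 0)
            process_input(buf, (size_t)len);
    }
    return 0;
}
```

ASAN slots into the same workflow: set AFL_USE_ASAN=1 when compiling and the AFL compiler wrappers add -fsanitize=address for you; the ASAN-based exercises cover the afl-fuzz memory-limit considerations this brings.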

The challenges have HINTS.md and ANSWERS.md files - these contain useful information about fuzzing different targets even if you're not going to attempt the challenge.

All of the challenges use real vulnerabilities from open source projects (the CVEs are identified in the descriptions), with the exception of the Cyber Grand Challenge extract, which is a synthetic vulnerability.

The chosen bugs are all fairly well isolated, and (except where noted) are very amenable to fuzzing. This means that you should be able to discover the bugs with a relatively small amount of compute time - these won't take core-days; most of them will take core-minutes. That said, fuzz testing is by definition a random process, so there's no guarantee how long it will take to find a particular bug, just a probability distribution: if each execution independently triggers the bug with probability p, the number of executions until the first crash is geometrically distributed, with mean 1/p.

Slides

The slides are available via Google Slides and in PowerPoint format. There is extra information in the speaker notes.

