InProcessForkExecutor #82

Closed
andreafioraldi opened this issue May 3, 2021 · 20 comments
Labels
enhancement · good first issue

Comments

@andreafioraldi
Member

ATM these two Executors are missing:

  • ForkserverExecutor must be an AFL-like forkserver executor; I guess we can borrow code from Angora
  • InProcessForkExecutor is a version of InProcessExecutor that forks before calling the harness. In this case, LibAFL must still be embedded into the target and we avoid controlling the target via a pipe, but we still need a harness and cannot fuzz binaries compiled with afl-cc
@andreafioraldi added the enhancement and good first issue labels May 3, 2021
@tokatoka
Member

tokatoka commented May 7, 2021

Are we going to go with injecting fork-server assembly code into the instrumented PUT
(https://github.com/google/AFL/blob/master/afl-as.h#L252),
or do we prefer to use #[ctor] to spin up a forkserver before main()? (It seems Angora does it this way.)
The latter seems much simpler, though.
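
For illustration, a rough sketch of the #[ctor] route, assuming the ctor and libc crates (this is the classic AFL forkserver handshake expressed in Rust, not code from either repo):

```rust
use std::os::raw::c_int;

// AFL's convention: control pipe on fd 198, status pipe on fd 199.
const FORKSRV_FD: c_int = 198;

// Runs before main(), like Angora's forkserver setup.
#[ctor::ctor]
fn init_forkserver() {
    unsafe {
        let mut buf = [0u8; 4];
        // Say hello to the fuzzer; if the status pipe is missing,
        // we are not running under a fuzzer, so fall through to main().
        if libc::write(FORKSRV_FD + 1, buf.as_ptr() as *const _, 4) != 4 {
            return;
        }
        loop {
            // Block until the fuzzer requests the next execution.
            if libc::read(FORKSRV_FD, buf.as_mut_ptr() as *mut _, 4) != 4 {
                libc::_exit(2);
            }
            let child = libc::fork();
            if child == 0 {
                // Child: drop the control fds and continue into main().
                libc::close(FORKSRV_FD);
                libc::close(FORKSRV_FD + 1);
                return;
            }
            // Parent: report the child pid, wait for it, report its status.
            libc::write(FORKSRV_FD + 1, (&child as *const c_int) as *const _, 4);
            let mut status: c_int = 0;
            libc::waitpid(child, &mut status, 0);
            libc::write(FORKSRV_FD + 1, (&status as *const c_int) as *const _, 4);
        }
    }
}
```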

@andreafioraldi
Member Author

For ForkserverExecutor, this is an implementation detail of the instrumentation backend; it has nothing to do with the lib.
For InProcessForkExecutor, it is simply an InProcessExecutor-like executor that, instead of calling the harness function directly, forks, calls the harness in the child, and waitpids in the parent.
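
In other words, the core of its run_target would look roughly like the sketch below (using the libc crate; the harness type and result handling are simplified, not LibAFL's actual API):

```rust
// Fork-per-execution sketch: run the harness in a child process so a
// crash (or corrupted global state) cannot take down the fuzzer itself.
fn run_target(harness: impl Fn(&[u8]), input: &[u8]) -> bool {
    unsafe {
        let child = libc::fork();
        if child == 0 {
            // Child: execute the harness on the input, then exit cleanly.
            harness(input);
            libc::_exit(0);
        }
        // Parent: reap the child and report whether it crashed.
        let mut status = 0;
        libc::waitpid(child, &mut status, 0);
        libc::WIFSIGNALED(status)
    }
}
```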

@andreafioraldi
Member Author

andreafioraldi commented May 7, 2021

By the way, the immediate goal of ForkserverExecutor is to execute AFL++ binaries compiled with afl-clang-fast.

@domenukk
Member

domenukk commented May 7, 2021

Basically this:
https://github.com/AFLplusplus/LibAFL-legacy/blob/53e339a38e27bcf5decf892928df459904442498/src/aflpp.c#L98

Or the "real" AFL++ version, which also adds shared-memory input, here:
https://github.com/AFLplusplus/AFLplusplus/blob/stable/src/afl-forkserver.c

(it doesn't have to fork itself but can use Rust's std::process::Command to execute the target)

/edit:
The Angora forkserver is here (but it won't work 1:1, as it uses Unix sockets instead of pipes):
https://github.com/AngoraFuzzer/Angora/blob/3cedcac8e65595cd2cdd950b60f654c93cf8cc2e/fuzzer/src/executor/forksrv.rs#L29
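
As a sketch of that Command-based approach (assuming the libc crate for pipe()/dup2(); the fd numbers follow AFL's convention and the function name is illustrative):

```rust
use std::os::unix::process::CommandExt;
use std::process::{Child, Command};

// AFL's convention: the target expects its control pipe on fd 198
// and its status pipe on fd 199.
const FORKSRV_FD: i32 = 198;

// Spawn an instrumented binary and wire up the two forkserver pipes.
// Returns (control write end, status read end, child handle).
fn spawn_forkserver(path: &str) -> std::io::Result<(i32, i32, Child)> {
    let mut ctl = [0i32; 2]; // fuzzer -> target
    let mut st = [0i32; 2]; // target -> fuzzer
    unsafe {
        if libc::pipe(ctl.as_mut_ptr()) != 0 || libc::pipe(st.as_mut_ptr()) != 0 {
            return Err(std::io::Error::last_os_error());
        }
    }
    let mut cmd = Command::new(path);
    unsafe {
        cmd.pre_exec(move || {
            // In the child, move the pipe ends onto the fds that the
            // injected forkserver code looks for, before exec() runs.
            libc::dup2(ctl[0], FORKSRV_FD);
            libc::dup2(st[1], FORKSRV_FD + 1);
            Ok(())
        });
    }
    let child = cmd.spawn()?;
    Ok((ctl[1], st[0], child))
}
```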

@tokatoka
Member

tokatoka commented May 9, 2021

I've pushed a piece of very WIP code to the forkserver branch.
We'll also need to take care of the shared memory (and make an observer (ShmemObserver) to monitor the shared map), right?

@domenukk
Member

domenukk commented May 9, 2021

Yes, I assume we can use a normal map observer, with the map pointing to a ShMem.
Shared-map input would eventually also be nice to have, as it's a lot quicker than (ab)using the file system.

/edit: looking at your code, it might be a good idea to implement Drop for the Pipe struct, so that the pipes close automatically when going out of scope :)
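
A minimal sketch of that Drop suggestion (the field names are illustrative, not necessarily the struct on the branch):

```rust
// A Pipe owning the two raw fds returned by libc::pipe().
struct Pipe {
    read_end: i32,
    write_end: i32,
}

impl Drop for Pipe {
    // Close both ends automatically when the Pipe goes out of scope,
    // so no fds leak even on early returns or panics.
    fn drop(&mut self) {
        unsafe {
            libc::close(self.read_end);
            libc::close(self.write_end);
        }
    }
}
```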

@tokatoka
Member

I'll use the Pipe struct you've pushed to the launcher branch

bitterbit added a commit to bitterbit/fuzzer-qemu that referenced this issue May 13, 2021
@domenukk
Member

@tokatoka want to create a PR for the current state, so we can finish it up together?

@tokatoka
Member

tokatoka commented May 19, 2021

Yes, I was kind of waiting for the launcher's Pipe struct to be merged into the main branch.
The one last thing left to do for ForkserverExecutor is to check the execution result (i.e., whether the fuzzed program has crashed or not).
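
That check amounts to interpreting the waitpid() status word the forkserver writes back over the status pipe; a small sketch using the libc crate's status helpers:

```rust
// A signal-terminated run (SIGSEGV, SIGABRT, ...) counts as a crash.
fn is_crash(status: i32) -> bool {
    libc::WIFSIGNALED(status)
}

// Which signal killed the child, if any (useful for triage).
fn crash_signal(status: i32) -> Option<i32> {
    libc::WIFSIGNALED(status).then(|| libc::WTERMSIG(status))
}
```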

@andreafioraldi
Member Author

Good job

@tokatoka mentioned this issue May 19, 2021
@tokatoka
Member

tokatoka commented Jun 8, 2021

BTW, for what kind of target programs do we prefer to use InProcessForkExecutor over InProcessExecutor?

@domenukk
Member

domenukk commented Jun 8, 2021

InProcessForkExecutor would help when the target is unstable, as state gets reset every so often.
However, a trivial solution already exists; it's possible to:

  • use the restarting manager
  • run fuzz_loop_for with a low number of iterations
  • call manager.on_restart(state)?, then exit the child (see the sketch below)

This still needs some serialization, which costs time; maybe there is a better way (for example, keeping large parts of the fuzzer on a shared map directly?)
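
A rough sketch of that restart pattern (Rust-flavored pseudocode; the exact LibAFL signatures of fuzz_loop_for and on_restart vary between versions):

```rust
// In the child spawned by the restarting manager: fuzz for a bounded
// number of iterations so unstable target state cannot accumulate.
fuzzer.fuzz_loop_for(&mut stages, &mut executor, &mut state, &mut mgr, 10_000)?;

// Serialize the state for the successor process...
mgr.on_restart(&mut state)?;

// ...and exit; the manager spawns a fresh child that deserializes
// the state and continues fuzzing where this one left off.
std::process::exit(0);
```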

@tokatoka
Member

tokatoka commented Jun 8, 2021

ok, makes sense.

@tokatoka
Member

I think we can call

```rust
let fd = libc::open(b"/dev/zero\0".as_ptr() as *const _, libc::O_RDWR);
libc::mmap(
    EDGES_MAP.as_mut_ptr() as *mut libc::c_void,
    65536,
    libc::PROT_READ | libc::PROT_WRITE,
    libc::MAP_SHARED | libc::MAP_FIXED,
    fd,
    0,
);
```

to make EDGES_MAP shared between the child and the parent.

But in this case, we need to make EDGES_MAP aligned to 65536 bytes, so can we make EDGES_MAP into a struct and attach #[repr(C, align(65536))] to it? (sketch below)

And I think we can wrap that mmap in a new function, ShMemProvider::new_fixed()?
(not sure what to do for ashmem and Windows 🤔)
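
The alignment part could look like this minimal sketch (the wrapper name is illustrative):

```rust
// Force 65536-byte alignment so EDGES_MAP's base address is a valid
// fixed target for mmap(..., MAP_FIXED, ...).
#[repr(C, align(65536))]
pub struct AlignedEdgesMap(pub [u8; 65536]);

impl AlignedEdgesMap {
    pub fn as_mut_ptr(&mut self) -> *mut u8 {
        self.0.as_mut_ptr()
    }
}

pub static mut EDGES_MAP: AlignedEdgesMap = AlignedEdgesMap([0; 65536]);
```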

@domenukk
Member

What's the benefit over using a normal ShMem? I'd just use that if possible; then it works on every OS.
Instrumentation will need a way to replace the map pointer (and, potentially, a size field?)

@tokatoka
Member

My idea was to remap EDGES_MAP as MAP_SHARED so that the bitmap is shared between the child and the parent, but a normal ShMem does not have a way to remap an existing memory region as MAP_SHARED.

@domenukk
Member

How do you mean? The contents of a ShMem are shared just fine, otherwise LLMP wouldn't work.

@tokatoka
Member

Never mind, I misunderstood.
Yes, I can just use ShMem directly. I'll make a WIP PR when it's done.

@domenukk changed the title from "ForkserverExecutor and InProcessForkExecutor" to "InProcessForkExecutor" Jul 12, 2021
@tokatoka
Member

tokatoka commented Aug 7, 2021

this can be closed now?

@domenukk
Member

domenukk commented Aug 7, 2021

Yes, great work
