Update to LibAFL 0.13.2 #5
Conversation
Thanks for your efforts so far @domenukk, much appreciated!

Interestingly, despite being allowed to edit this pull request, GitHub does not allow me to push: not from my local machine, and not even from Codespaces. I'll dive into that.

It might have been related to non-existing submodules, as we use relative URLs for those. I fixed the URLs so they are also correct in forks, in #6.
This reverts commit 8c39a2f.
Note to other

Two findings so far: …

The typo will be fixed in AFLplusplus/LibAFL#2515. Those two issues are probably related; TODO: check the harness.

EDIT: 1. seems resolved now. For 2., we somehow reproducibly have no coverage at all for precisely the 4th input that is processed (tested on two different targets).
… good. Modify logging to reflect this behaviour
Played around a bit, including modifying the …

As a side note, it would be better to get rid of this DummyCoverageClient and fall back to the EndpointCoverageClient when there is no instrumentation of the target (black-box mode). It remains to check whether the fuzzing also fails when we use e.g. the LCOV client. If that does work, I would suggest refactoring to ensure that we use the EndpointCoverageClient as a fallback in black-box mode; see the sketch below.
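To make the suggestion concrete, here is a minimal, self-contained sketch of the proposed fallback. All types here are simplified stand-ins for WuppieFuzz's real ones; the actual trait, constructors, and config representation may differ.

```rust
/// Simplified stand-in for WuppieFuzz's coverage client abstraction.
trait CoverageClient {
    fn fetch_coverage(&mut self) -> Vec<u8>;
}

/// Derives "coverage" from observed endpoint responses; always available,
/// even without any instrumentation of the target.
struct EndpointCoverageClient;

impl CoverageClient for EndpointCoverageClient {
    fn fetch_coverage(&mut self) -> Vec<u8> {
        vec![0; 256] // placeholder map built from (endpoint, status) pairs
    }
}

/// Hypothetical config enum for the instrumentation-based formats.
enum CoverageFormat {
    Jacoco,
    Lcov,
}

/// In black-box mode (no `--coverage-format` given), fall back to endpoint
/// coverage instead of a do-nothing DummyCoverageClient.
fn make_coverage_client(format: Option<CoverageFormat>) -> Box<dyn CoverageClient> {
    match format {
        Some(_fmt) => todo!("instantiate the instrumentation-specific client"),
        None => Box::new(EndpointCoverageClient),
    }
}
```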
Is this in (new) LibAFL code?
It seems like it happens in the LibAFL code. I am not sure if it is 'new', as the check on which it fails was introduced later. It could well be the case that this is also not working properly in the current version of WuppieFuzz that is based on LibAFL 0.11.2; that would require some additional testing. In the meantime I tried a Java target using code coverage: same behaviour. For testing purposes, I created the following example. Steps to reproduce:

```sh
docker compose up
cargo run fuzz --log-level debug --timeout 60 --coverage-format jacoco <path to petstore_openapi.yaml>
```

Edit: I just noticed that, at least in the logging, the coverage also hangs on 'unknown' in v1.0.0. @grebnetiew, something to explore.
Ok, maybe this does not happen in the LibAFL code. Let's find out why the …

This is resolved in #9 and also in this branch. So we're back at an issue in the LibAFL part, I guess.

What's the buggy behavior? LibAFL emptying out observers before the feedback has been fully processed?
That seems to be the case indeed. I expect this goes wrong in the … To my surprise, … is already a map of pure zeroes.

Ok, some progress. This behaviour is not new: on the main branch I spot similar behaviour, where … Still, I have no clue as to why the map is empty in this part.

@domenukk, any thoughts?
I mean, the only time LibAFL resets the observer map is in …

Line 264 in 7b47203

Working theory: this will clone the inner … (see the illustration below).
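If that theory holds, the observer ends up operating on a copy of the coverage map rather than the shared buffer, which would explain the map of pure zeroes. A self-contained illustration of the pitfall in plain Rust (not LibAFL code):

```rust
fn main() {
    let mut original: Vec<u8> = vec![0; 4];

    // Cloning copies the backing storage, so writes land in the copy only
    // and the original still reads as all zeroes.
    let mut cloned = original.clone();
    cloned[0] = 1;
    assert_eq!(original[0], 0);

    // Sharing the same backing storage keeps both views consistent.
    {
        let shared: &mut [u8] = &mut original;
        shared[0] = 1;
    }
    assert_eq!(original[0], 1);
}
```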
@ThomasTNO can you try if ab53217 helps at all?
I think it looks very good! Very happy about the new stopping mechanism. I made some small changes:
- refactored some of the new unwraps, since they could be statically guaranteed
- added a multimap observer to let the scheduler see both endpoint coverage and line coverage, and turned the Dummy client back to doing nothing (but now with a size-1 map)

I haven't tested this yet on the usual targets.
Oh actually, though it passes `cargo check`, it doesn't build!
That's a new one for me :) Not sure whether I just broke it, either. (Edit: I did! Fortunately the hint was accurate.)

That error comes from this. Can you call `track_indices()` on your combined observer?
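A minimal sketch of what that call looks like, assuming LibAFL 0.13.2's `CanTrack` trait and a plain `StdMapObserver` (the combined observer in this PR is a `MultiMapObserver`, but the call is the same; the map and name here are illustrative):

```rust
use libafl::observers::map::ExplicitTracking;
use libafl::observers::{CanTrack, StdMapObserver};

/// Wrap a raw coverage map in an observer and enable index tracking,
/// which is what CalibrationStage's handle lookup requires.
fn tracked_observer(
    map: &mut [u8],
) -> ExplicitTracking<StdMapObserver<'_, u8, false>, true, false> {
    // SAFETY: the observer keeps a pointer into `map`, which outlives it here.
    let observer = unsafe { StdMapObserver::new("code_coverage", map) };
    observer.track_indices()
}
```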
TODO
@grebnetiew, just a quick test run:

Black box mode: …

The latter is fixed in bd676f7.
```rust
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, collective_feedback, objective);
```

```diff
-let collective_observer = tuple_list!(endpoint_observer, coverage_observer, time_observer);
+let collective_observer = tuple_list!(
```
I am not sure what exactly you tried to achieve, but the cause of the missing key lies here, I believe. The `combined_map_observer` is not present in this list, from which the `all_maps` entry is later retrieved.
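For readers following along: stages resolve observers from the tuple list by a (name, type) handle, so an observer that was never put into the list cannot be found. A self-contained analogy of the failure (not LibAFL code; names match the thread for readability):

```rust
/// Look up a map by name, panicking like LibAFL's handle lookup does.
fn find_map<'a>(observers: &[(&'a str, &'a [u8])], name: &str) -> &'a [u8] {
    observers
        .iter()
        .find(|(n, _)| *n == name)
        .map(|(_, map)| *map)
        .unwrap_or_else(|| panic!("Could not find entry matching handle {name:?}"))
}

fn main() {
    let endpoint = [0u8; 4];
    let line = [0u8; 4];
    // Only the individual observers were registered; "all_maps" was not.
    let observers = [("endpoint_coverage", &endpoint[..]), ("code_coverage", &line[..])];
    let _ = find_map(&observers, "all_maps"); // panics, like calibrate.rs does
}
```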
In case this was a question for me, I didn't change this :)
Nope, this is a question for @grebnetiew :)
I tried to leave this the same, apart from a rename. As far as I understand (but I might not :)), the `MultiMapObserver` internally points to the same map as the other observer, and I would expect `all_maps` to eventually get the same maps from the tuple-list as from the multimap. I'll check my assumptions shortly ;)
I have attempted to fix the problem by (inlining the creation of the multimap observer, because naming its type is prohibitively time-consuming, and) using the multimap observer also in the tuple list. Does this fix the missing-key problem you speak of, @ThomasTNO?
This is the behaviour I observe:

```text
thread 'main' panicked at /home/thomas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/libafl-0.13.2/src/stages/calibrate.rs:156:34:
Could not find entry matching Handle { name: "code_coverage", type: "libafl::observers::map::ExplicitTracking<libafl::observers::map::StdMapObserver<u8, false>, true, true>" }
stack backtrace:
   0: rust_begin_unwind
             at /rustc/051478957371ee0084a7c0913941d2a8c4757bb9/library/std/src/panicking.rs:652:5
   1: core::panicking::panic_fmt
             at /rustc/051478957371ee0084a7c0913941d2a8c4757bb9/library/core/src/panicking.rs:72:14
   2: <libafl_bolts::tuples::RefIndexable<RM,M> as core::ops::index::Index<&libafl_bolts::tuples::Handle<T>>>::index
             at /home/thomas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/libafl_bolts-0.13.2/src/tuples.rs:638:13
   3: <libafl::stages::calibrate::CalibrationStage<C,E,O,OT> as libafl::stages::Stage<E,EM,Z>>::perform
             at /home/thomas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/libafl-0.13.2/src/stages/calibrate.rs:156:34
   4: libafl::stages::Stage::perform_restartable
             at /home/thomas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/libafl-0.13.2/src/stages/mod.rs:127:13
   5: <(Head,Tail) as libafl::stages::StagesTuple<E,EM,<Head as libafl::state::UsesState>::State,Z>>::perform_all
             at /home/thomas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/libafl-0.13.2/src/stages/mod.rs:214:17
   6: <libafl::fuzzer::StdFuzzer<CS,F,OF> as libafl::fuzzer::Fuzzer<E,EM,ST>>::fuzz_one
             at /home/thomas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/libafl-0.13.2/src/fuzzer/mod.rs:804:9
   7: wuppiefuzz::fuzzer::fuzz
             at ./src/fuzzer.rs:334:9
   8: wuppiefuzz::main
             at ./src/main.rs:87:34
   9: core::ops::function::FnOnce::call_once
             at /rustc/051478957371ee0084a7c0913941d2a8c4757bb9/library/core/src/ops/function.rs:250:5
```
We are now missing a different map ;)
Possibly we are misinterpreting the working of the `MultiMapObserver`…

a94cb25 seems to make it all operational.
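For reference, a sketch of what building the combined observer over shared buffers can look like, assuming LibAFL 0.13.2's `MultiMapObserver::new(name, Vec<OwnedMutSlice<_>>)` signature; the buffers and sizes are stand-ins for WuppieFuzz's real coverage maps. The key point: the slices must reference the live buffers, not clones.

```rust
use libafl::observers::{CanTrack, MultiMapObserver};
use libafl_bolts::ownedref::OwnedMutSlice;

fn main() {
    // Stand-ins for the endpoint- and line-coverage buffers.
    let mut endpoint_map = vec![0u8; 1024];
    let mut line_map = vec![0u8; 65536];

    // Both slices point into the live buffers, so updates and resets stay
    // visible to every observer over the same memory.
    let combined_map_observer = MultiMapObserver::new(
        "all_maps",
        vec![
            OwnedMutSlice::from(endpoint_map.as_mut_slice()),
            OwnedMutSlice::from(line_map.as_mut_slice()),
        ],
    )
    .track_indices();

    let _ = combined_map_observer;
}
```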
Currently, JaCoCo coverage seems to work fine and black-box mode is operational, but LCOV coverage seems to fail (tested on Python). I suspect an existing issue, though: possibly the coverage agent is too fast or too slow compared to the coverage gathering, resulting in an empty map.
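If the timing suspicion is right, one hypothetical mitigation would be to poll briefly instead of reading the coverage once. A self-contained sketch, not WuppieFuzz's actual code (`fetch` and the timeout are illustrative):

```rust
use std::time::{Duration, Instant};

/// Poll `fetch` until the map is non-empty or `timeout` elapses, so a slow
/// coverage agent still gets a chance to report before we give up.
fn wait_for_coverage(mut fetch: impl FnMut() -> Vec<u8>, timeout: Duration) -> Vec<u8> {
    let start = Instant::now();
    loop {
        let map = fetch();
        if map.iter().any(|&b| b != 0) || start.elapsed() >= timeout {
            return map;
        }
        std::thread::sleep(Duration::from_millis(50));
    }
}
```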
> LCOV coverage seems to fail (tested on Python)

This was a lie: I accidentally used a pre-release, outdated version of WuppieFuzz-Python. With the latest version it does work. Will post a reproducible setup soon.

LCOV testing setup: …
Thanks for the help @domenukk, v1.1.0 has just been released, including the update to LibAFL 0.13.2. Let's stay up to date from now on :)

Happy to help, keep up the good work! :)
This is a quick and dirty update to LibAFL 0.13.2.
I have not tested this at all; some behaviour might have changed. Use at your own risk.
However, it does build, so it may be a good starting point.
I left one TODO in there, in fuzzer.rs: