Segmentation fault test case #18
Conversation
To avoid false alarms, can anyone confirm if this is a valid test case?
Result:
https://github.com/oxen-io/libsession-util/blob/dev/src/config.cpp#L77

With the above test case, the crash is reached at:
https://github.com/oxen-io/libsession-util/blob/dev/src/config.cpp#L105
This hack solves the crash for me and passes all the tests, but I'm not sure it's the right fix.
It isn't valid data, but it definitely is a bug that we aren't detecting that (which is then causing this segfault because the code is assuming the input is valid). The invalidity here is caused by the empty set: those are supposed to be pruned when serializing.
We prune empty dicts/sets, but were failing to detect input data with an unpruned dict or set, which then caused a segfault where we were assuming that sets/dicts will always be non-empty. This commit adds proper detection and failure if we encounter such a case.
Thanks for the PR and investigating this! I've pushed a fix into this PR that fixes your reported test case (which should be an exception raised for being an invalid config, definitely not a segfault). Your proposed fix did address it, but isn't really what we want because if some external client is producing invalid data, we want to reject it so that we always produce identical serialization for the same input (in this case: we consider a missing set and an empty set to be exactly equivalent data). The same issue was there for an empty dict, which this now also tests and catches.
Thanks for taking care of this and implementing a proper fix!
Thanks for the explanation; it inspires me to think about some more useful patterns for property-based testing. I'll update with my findings if I find anything new and interesting.
To extend the rationale a little: the reason we need identical output is that de-duplication happens at the swarm level, so if two different clients upload the same content (for example, the result of both trying to resolve a previous conflict) they end up uploading the exact same thing, which the storage server de-duplicates and stores only once.
I'm trying to investigate a segmentation fault triggered by a test case.
I don't understand what's going on yet; I'm just using the CI system to verify my finding.