Unsubscribing from server node in UA Expert / OPC UA Client causes segmentation fault #125
Comments
Just wanted to add this as I just saw it during debugging my own changes.
Thank you, I was able to reproduce the issue. I'll start working on a fix.
It seems that our destructor callback is called for unrelated nodes that have …
For reference, I couldn't figure out where the unexpected …
## Description

This PR is a workaround for #125 to prevent segfaults. See that issue for more details and further investigations to properly solve the underlying problem.

The changes leak memory: associated data for created nodes is not freed. As long as the server's node tree is static, this is not a problem. But we must be aware that repeated node creation and destruction during the server's runtime will fill up the available heap memory.
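To make the trade-off concrete, here is a minimal sketch of a destructor callback that deliberately skips freeing the node context; the function name and signature are simplified assumptions for illustration, not the crate's actual code:

```rust
use std::ffi::c_void;

// Hedged sketch of the workaround described above: the destructor callback does
// nothing with the node context, so nothing is ever freed (and nothing can be
// freed twice). Simplified stand-in, not the crate's real callback.
unsafe extern "C" fn destructor_c(node_context: *mut c_void) {
    // Intentionally do NOT reconstruct and drop the boxed context here.
    // The per-node data leaks, which is harmless while the node tree is static,
    // but accumulates if nodes are created and destroyed during runtime.
    let _ = node_context;
}

fn main() {
    // Simulate open62541 invoking the callback when a node is deleted.
    let context: *mut c_void = Box::into_raw(Box::new(42u32)) as *mut c_void;
    unsafe { destructor_c(context) };
    // `context` is never freed -- this is the deliberate trade-off of the workaround.
}
```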
I think I can help you a bit with the cause of this issue. In our case we are creating the server with function ….

In our case (with home-grown bindings to open62541) it was easier to notice because we are using the original open62541 logger, and I observed logs being present in the code but not in the output; the crash in our case was also happening in the logger. So I started with debugging the logger and found the missing logs were being called with a context pointing to NULL (…).

Once I realised this must be a use after free, I allocated the config on the heap and leaked it (…).

I suppose your issue might have a similar cause; maybe it's a dangling pointer from the config as well? In that case leaking the config and reverting #137 shouldn't crash. If it still does, that will likely be a different dangling pointer.
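For illustration, a minimal sketch of the "allocate the config on the heap and leak it" idea, assuming hypothetical `ServerConfig` and `Server` types (not the real open62541 bindings); only the `Box::leak` pattern itself is the point:

```rust
// Hypothetical stand-in for a server configuration owned by the bindings.
struct ServerConfig {
    port: u16,
}

// Hypothetical server that keeps a reference to its config for its whole lifetime.
struct Server {
    config: &'static ServerConfig,
}

impl Server {
    fn new_with_config(config: &'static ServerConfig) -> Self {
        Server { config }
    }
}

fn main() {
    // Box::leak turns the heap allocation into a &'static reference and never frees
    // it: a small, one-time leak in exchange for a config that cannot dangle.
    let config: &'static ServerConfig = Box::leak(Box::new(ServerConfig { port: 4840 }));
    let server = Server::new_with_config(config);
    println!("server configured for port {}", server.config.port);
}
```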
@mati865 Thank you for your input, it is greatly appreciated. I gave your suggestion a try; I reactivated the call to …. Unfortunately, it did not solve the issue.

I have looked deeper into the issue; it turns out we're trying to free pointers that we have not put into the …. So this is not so much a matter of dangling pointers, but of trying to use data (and interpret it as a pointer to a data structure) that does not match our expectations.

We will still need to keep an eye open for the missing deep copies in ….
You probably meant …. Anyway, good to know. I think you might encounter issues with dangling pointers from the config if you configure security and certificates, but I debugged it a long time ago, so I don't remember the details.
Yes, obviously. Sorry for the misleading typo 😓
I just found a bug, which is caused either by the open62541 server itself or by this crate. As the segmentation fault happens in the crate's code, this is probably the right place to report it.
How to reproduce the error?
If you add multiple subscriptions to the node, then only the last removal will cause the segmentation fault.
I already spent a while debugging the issue. The segmentation fault is caused in `src/Server.rs > build > destructor_c`. The log message is printed with the node id, and then the segmentation fault happens, likely by the `NodeContext::consume(node_context)` call.

In the attached log, you can see that before the Rust code runs, the `DeleteSubscriptionsRequest` gets processed. I looked in the open62541 source and found out that the memory pointed to by the node pointer gets freed, but it seems that the pointer isn't set to null. This probably causes the Rust code to think that the pointer isn't null, and then it tries to free the memory a second time.

Temporary solution: remove the `NodeContext::consume(...)` call. This gets rid of the segmentation fault, but may cause a memory leak somewhere else.

important_part_of_the_log.txt
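As an illustration of why this crashes, here is a hedged sketch of what a node-context "consume" step typically looks like in FFI bindings; `NodeContext` and `consume` are stand-ins and an assumption about the pattern, not the crate's actual `NodeContext::consume` implementation:

```rust
use std::ffi::c_void;

// Per-node data owned by the Rust side; a stand-in for the crate's node context type.
struct NodeContext {
    label: String,
}

// Sketch of a typical "consume" step: the raw context pointer handed back by the C
// library is turned back into the Box it originally came from and dropped.
unsafe fn consume(node_context: *mut c_void) {
    if node_context.is_null() {
        return;
    }
    // If open62541 has already freed this memory but left the pointer non-null, the
    // null check above does not help: reconstructing the Box frees it a second time,
    // which is the double free / segmentation fault described in this issue.
    let boxed = Box::from_raw(node_context as *mut NodeContext);
    println!("freeing node context for {}", boxed.label);
    drop(boxed);
}

fn main() {
    let ctx = Box::into_raw(Box::new(NodeContext { label: "demo".into() })) as *mut c_void;
    unsafe { consume(ctx) }; // first call: fine, frees the context
    // Calling `consume(ctx)` again with the now-dangling pointer would be undefined behaviour.
}
```

Under that pattern, a null check alone cannot prevent the crash, because open62541 leaves the stale pointer non-null, which is why the temporary fix above skips the call entirely.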