Buffer size assert failure #1665
Comments
Can you uncomment DEBUGTAG in GdbConnection.cc and dump the resulting logs somewhere?
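(A hedged note on what that involves, assuming rr's usual per-file debug-logging convention, where a commented-out tag at the top of the file gates the log macros:)

```cpp
// Top of GdbConnection.cc -- change the (assumed) commented-out tag
//#define DEBUGTAG "GdbConnection"
// into
#define DEBUGTAG "GdbConnection"
// and rebuild; the GDB remote-protocol traffic is then logged verbosely.
```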
I assume you want the output during the replay?
@rocallahan The log is here. The first file is the last ~200 lines of the second one, which is the 24M full log, including all of the very long write_flush calls before the assert triggers.
I have no idea what gdb is doing here. Can you find out what symbol address 0x4c3fe2e is for?
And 0x4c44fd8.
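(For example, assuming the debug session is still live, gdb's `info symbol` command maps an address back to the nearest known symbol, or reports that none matches:)

```
(gdb) info symbol 0x4c3fe2e
(gdb) info symbol 0x4c44fd8
```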
I wonder if it's some weird auto-loading macro stuff. You might try disabling that to see if it makes a difference.
Is there a way to figure that out without replaying it for another 2 hours?
No, so I guess it's probably not worth it. I'll try to figure out a way to make gdb generate such large requests. If I can, I'll try just implementing dynamic allocation, which should fix your problems.
Also, for context: this is running a test for array operations in Julia (with a JIT). I have some other processes running the same test (since the problem is not very reproducible). ASLR is on, so the addresses are not the same across runs, but both addresses always fall in the heap address range.
OK, thanks. FWIW, if you're having difficulty reproducing a bug, try rr chaos mode.
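(Chaos mode randomizes scheduling decisions during recording to make intermittent bugs more reproducible; it's enabled with a record-time flag. `./my-test` below is a placeholder for your program:)

```
$ rr record --chaos ./my-test
$ rr replay
```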
Ha, I didn't know about that (seems new?). Thanks, I'll definitely give it a try next time.
(Just realized that the assertion I hit is the one for write instead of the one I linked above...)
I can't get my gdb (7.10.1-30.fc23) to make such large requests. It always breaks memory reads/writes up into 8192 bytes or less.
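(One knob that may explain the 8192-byte chunking, noted as an assumption rather than something verified in this thread: gdb sizes remote memory transfers from its packet-size settings, which can be inspected and raised:)

```
(gdb) show remote memory-write-packet-size
(gdb) set remote memory-write-packet-size 65536
```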
I've changed the assertion to a sleep and got a backtrace of gdb.
And I believe this happens in the JIT debugging interface: http://docs.adacore.com/gdb-docs/html/gdb_30.html
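(A sketch of that assertion-to-sleep trick, assuming the failing check is a plain assert in write_data_raw; the names `buf` and `len` are hypothetical:)

```cpp
#include <unistd.h>

// Instead of: assert(len <= (ssize_t)sizeof(buf));
// park the process so it stays alive for inspection:
if (len > (ssize_t)sizeof(buf)) {
  while (true) {
    sleep(1);  // now attach to gdb itself with `gdb -p <pid>` for a backtrace
  }
}
```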
I couldn't figure out how to check the process being debugged directly. The in-memory object file starts in the heap range mentioned above. I'm not sure how this can be generated in a test (without a JIT compiler). This particular interface is apparently not too well tested and a little buggy, so I wouldn't be surprised if it doesn't use the same optimizations as the other code paths. I'll keep the sleeping processes open in case more info is needed (I can't do much other than dumping memory, though). I'd also appreciate a temporary workaround before this is properly fixed, so that I can finish the replay. (edit: I'll just try increasing the size of the buffer.)
Dump of all the headers
I'm guessing registering (a copy of) a real object file through the JIT debugging interface can probably show this behavior.
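(For reference, a minimal self-contained sketch of that idea, following the structures in the linked GDB docs; `image` is a placeholder for a copy of a real object file, and `register_image` is a hypothetical helper:)

```cpp
#include <cstdint>

// Structure and symbol names are fixed by the GDB JIT interface:
// gdb sets a breakpoint on __jit_debug_register_code and walks
// __jit_debug_descriptor to find in-memory object files.
extern "C" {

typedef enum { JIT_NOACTION = 0, JIT_REGISTER_FN, JIT_UNREGISTER_FN } jit_actions_t;

struct jit_code_entry {
  jit_code_entry* next_entry;
  jit_code_entry* prev_entry;
  const char* symfile_addr;  // start of the in-memory object file image
  uint64_t symfile_size;
};

struct jit_descriptor {
  uint32_t version;
  uint32_t action_flag;
  jit_code_entry* relevant_entry;
  jit_code_entry* first_entry;
};

// gdb breakpoints this function; it must not be inlined or optimized away.
void __attribute__((noinline)) __jit_debug_register_code() {
  asm volatile("" ::: "memory");
}

jit_descriptor __jit_debug_descriptor = { 1, JIT_NOACTION, nullptr, nullptr };
}

// Registering a large image should make gdb read symfile_size bytes out of
// the debuggee's heap, plausibly reproducing the oversized transfers here.
static jit_code_entry entry;

void register_image(const char* image, uint64_t size) {
  entry.next_entry = entry.prev_entry = nullptr;
  entry.symfile_addr = image;
  entry.symfile_size = size;
  __jit_debug_descriptor.first_entry = &entry;
  __jit_debug_descriptor.relevant_entry = &entry;
  __jit_debug_descriptor.action_flag = JIT_REGISTER_FN;
  __jit_debug_register_code();
}
```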
Thanks for explaining that. I think I'll just make these buffers dynamic; I can test that locally even if I can't reproduce the original problem.
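(A minimal sketch of that direction, assuming the connection currently stages bytes in a fixed-size array guarded by an assert; the class and member names are illustrative, not rr's actual code:)

```cpp
#include <sys/types.h>
#include <cstdint>
#include <vector>

class ConnectionBufferSketch {
public:
  // With a growable vector there is no fixed capacity to assert on:
  // arbitrarily large writes from gdb simply extend the buffer.
  void write_data_raw(const uint8_t* data, ssize_t len) {
    outbuf.insert(outbuf.end(), data, data + len);
  }

private:
  std::vector<uint8_t> outbuf;  // was (assumed): uint8_t outbuf[N]; + assert
};
```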
This was caused by the call to jl_has_intrinsics in the LambdaInfo constructor, which looked up symbols inside the function too soon.
I've repeatedly hit this assertion failure in GdbConnection::write_data_raw when trying to debug a segfault. The message suggests that this is because the static buffer size is too small?
Platform: Arch Linux x64, rr version 11a08c0