
ASSERT (ARM ChromeOS) in memcache_query_memory #1728

Open
peterpengwei opened this issue Jul 8, 2015 · 5 comments
@peterpengwei

When running a telemetry benchmark program, roi.maps, under DynamoRIO debug mode on an ARM-based Chromebook running ChromeOS, an assertion failure is triggered at the very beginning. The failed assertion is in function memcache_query_memory() in $DYNAMORIO_HOME/core/unix/memcache.c; the detailed output is:

<all_memory_areas is missing region 0xbea02000-0xbea58000!>
<Application /usr/local/bin/python2.7 (4845).  Internal Error: DynamoRIO debug check failure: $DYNAMORIO_HOME/core/unix/memcache.c:443 false
(Error occurred @29777 frags)
version 5.0.16624, custom build
-no_dynamic_options -code_api -stack_size 56K -max_elide_jmp 0 -max_elide_call 0 -early_inject -emulate_brk -no_inline_ignored_syscalls -native_exec_default_list '' -no_native_exec_managed_code -no_indcall2direct 
0x710d7031 0x78af04b0>
@derekbruening derekbruening changed the title CRASH (ARM ChromeOS) in memcache_query_memory ASSERT (ARM ChromeOS) in memcache_query_memory Jul 8, 2015
zhaoqin (Contributor) commented Jul 9, 2015

I can reproduce this assert; it seems it might be a Linux kernel bug.

At the start of execution, part of the memory map:
b6fe5000-b6fe7000 rw-p 00000000 b3:01 520431 /usr/local/bin/python2.7
b6fe7000-b6fe8000 ---p 00000000 00:00 0
b6fe8000-b6fe9000 rw-p 00000000 00:00 0
bed13000-bed34000 rw-p 00000000 00:00 0 [stack]
ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]

Later, the [stack] label is removed:
b6fe5000-b6fe7000 rw-p 00000000 b3:01 520431 /usr/local/bin/python2.7
b6fe7000-b6fe8000 ---p 00000000 00:00 0
b6fe8000-b6fe9000 rw-p 00000000 00:00 0
bed13000-bed34000 rw-p 00000000 00:00 0
ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]

In the middle of execution, as we can see, some of the memory regions are listed twice in the /proc/pid/maps file:
b6fe5000-b6fe6000 r--p 00000000 b3:01 520431 /usr/local/bin/python2.7
b6fe6000-b6fe7000 rw-p 00001000 b3:01 520431 /usr/local/bin/python2.7
b6fe7000-b6fe8000 ---p 00000000 00:00 0
b6fe8000-b7009000 rw-p 00000000 00:00 0
bed13000-bed34000 rw-p 00000000 00:00 0
ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]
b6fe7000-b6fe8000 ---p 00000000 00:00 0
b6fe8000-b7009000 rw-p 00000000 00:00 0
bed13000-bed34000 rw-p 00000000 00:00 0
ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]

And at the time of the assert, the duplicated memory regions are gone:
b6fe5000-b6fe6000 r--p 00000000 b3:01 520431 /usr/local/bin/python2.7
b6fe6000-b6fe7000 rw-p 00001000 b3:01 520431 /usr/local/bin/python2.7
b6fe7000-b6fe8000 ---p 00000000 00:00 0
b6fe8000-b715f000 rw-p 00000000 00:00 0
bed13000-bed34000 rw-p 00000000 00:00 0
ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]

<all_memory_areas is missing region 0xbecde000-0xbed34000!>
The memory region causing trouble is the one that used to be labeled [stack], was then duplicated, and is now gone.

From the DR log, I did not see any syscall unmapping that memory.

zhaoqin (Contributor) commented Jul 9, 2015

Assert callstack:

#1  0x712132d0 in read_syscall (fd=0, buf=0x4f11e3b7, nbytes=1)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/unix/os.c:3538
#2  0x71213a0a in os_read (f=0, buf=0x4f11e3b7, count=1)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/unix/os.c:3785
#3  0x710ed97e in notify (priority=SYSLOG_ERROR, internal=true, synch=true,
    substitution_num=0, prefix=0x712c9524 "SYSLOG_ERROR",
    fmt=0x712c9534 "all_memory_areas is missing region 0x%08x-0x%08x!")
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/utils.c:1976
#4  0x71233640 in memcache_query_memory (pc=0xbecfc004 "", out_info=0x4f11ec60)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/unix/memcache.c:440
#5  0x7121ce16 in query_memory_ex (pc=0xbecfc004 "", out_info=0x4f11ec60)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/unix/os.c:8235
#6  0x7121ce86 in get_memory_info (pc=0xbecfc004 "", base_pc=0x4f121024, size=0x4f11ec94,
    prot=0x0)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/unix/os.c:8257
#7  0x7121ccfa in get_stack_bounds (dcontext=0x4f0dbfe0, base=0x4f11ece0, top=0x4f11ecdc)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/unix/os.c:8185
#8  0x7116e5f2 in is_on_stack (dcontext=0x4f0dbfe0, pc=0xb6bf4000 "", area=0x0)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/vmareas.c:3820
#9  0x7116f8e0 in app_memory_allocation (dcontext=0x4f0dbfe0, base=0xb6bf4000 "",
    size=4096, prot=7, image=false, comment=0x712c3380 "mmap")
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/vmareas.c:6054
#10 0x7121a68c in process_mmap (dcontext=0x4f0dbfe0, base=0xb6bf4000 "", size=4096,
    prot=7, flags=34, map_type=0x712c3380 "mmap")
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/unix/os.c:7035
#11 0x7121af32 in post_system_call (dcontext=0x4f0dbfe0)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/unix/os.c:7257
#12 0x710e0668 in handle_post_system_call (dcontext=0x4f0dbfe0)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/dispatch.c:2015
#13 0x710d9d8c in dispatch_enter_dynamorio (dcontext=0x4f0dbfe0)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/dispatch.c:796
#14 0x710d7076 in dispatch (dcontext=0x4f0dbfe0)
    at /usr/local/google/home/zhaoqin/Workspace/DynamoRIO/dynamorio.git/core/dispatch.c:154
#15 0xb668b0c4 in ?? ()

derekbruening (Contributor) commented:
Xref weird empty regions in the maps file we had to work around in the past: #599

zhaoqin (Contributor) commented Jul 24, 2015

The racy read/write of /proc/pid/maps by the user program and the kernel seems to be the cause of the problem. We need to adjust our code to tolerate the inconsistency caused by racy access.

derekbruening (Contributor) commented:
The maps iterator already has some code to handle racy changes of the file underneath it.
