Increasing memory usage in update_current_time
#1500
Comments
I was about to open an issue about this. I've also just identified where the possible leak comes from. After a few requests, the output is:
After many requests over a few minutes it turns into:
The following image shows the memory usage evolution of the same application running in the Kubernetes cluster. During the time the memory usage grew, the application basically received only health-check requests, which are simple GETs returning a small JSON payload saying the application is alive.
Environment (please complete the following information):
|
I would take a wild guess that Python's GC is not collecting the hanging futures created by each call the function schedules for itself - but I might be wrong. IMO, we don't need to update the clock every second; we just need to call time() when the value is actually needed. |
Might be. According to the comment it was supposed to mitigate the possible overhead but I'm not sure how much it helps.
|
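To make the trade-off in the last two comments concrete, here is a minimal sketch of a once-per-second cached clock next to the simpler approach of reading the clock at the point of use. This is an illustration of the pattern only, not the actual Sanic source, and keep_alive_deadline is a hypothetical helper:

import asyncio
from functools import partial
from time import time

current_time = time()

def update_current_time(loop):
    # Cached-clock approach: refresh a module-level timestamp once per second
    # and reschedule itself via call_later; readers then use the global
    # instead of calling time() themselves.
    global current_time
    current_time = time()
    loop.call_later(1, partial(update_current_time, loop))

def keep_alive_deadline(timeout):
    # Direct approach suggested above: just read the clock where the value is
    # needed; the benchmark in the next comment suggests a bare time() call is
    # cheap enough to make the cache unnecessary.
    return time() + timeout

loop = asyncio.get_event_loop()
update_current_time(loop)  # keeps rescheduling itself once the loop runs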
Yep, but I think this is unnecessary, since calling time() directly is cheap:

import timeit

setup_stmt = """
from time import time
def call_time():
    assert time() is not None
"""
timed_stmt = {
    "call_time": 'call_time()',
}
times = 1_000_000
for name, stmt in timed_stmt.items():
    time_to_run = timeit.timeit(stmt, setup=setup_stmt, number=times)
    time_each = time_to_run / times
    # Report total time, per-call latency in microseconds, and calls per second.
    print("%s: %d results in %f s (%f us per result, %f per sec)" %
          (name, times, time_to_run, time_each * 1e6, 1. / time_each))
|
I can confirm that removing the update_current_time call stops the memory growth. I can make a PR if this sounds like the proper way to go forward? |
Great, thanks for resolving it so quickly! |
@FUSAKLA Look for two releases for |
Thanks for sharing |
I had the same memory-leak issue with 18.12.1, and then I updated to 19.3, but the memory is still increasing, at a very slow rate. I use multiple workers; the machine has 4 cores and 6 GB of memory:
That is the result when I exec the command. Another question: why are there so many processes? I think it should be 4 processes. I use pm2 fork mode; is it related to pm2? |
@RifeWang I would look to pm2 first. I am not 100% familiar with it and how it handles python applications. Does it act as a proxy server? How and when does it decide to start a process? To continue this conversation, please start a new issue and link to this here. |
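For anyone puzzling over the process count in the last two comments, a small sketch (hypothetical app and route names) that logs which process serves each request; run with workers=4, Sanic itself should account for four worker processes plus the parent, and anything beyond that would come from whatever process manager (pm2, etc.) sits in front:

import os

from sanic import Sanic, response
from sanic.log import logger

app = Sanic(name="worker-count-demo")

@app.route("/ping")
async def ping(request):
    # Each Sanic worker is a separate OS process, so logging the PID shows
    # how many processes are actually serving traffic.
    logger.info("served by pid=%s", os.getpid())
    return response.json({"pid": os.getpid()})

if __name__ == "__main__":
    # workers=4 makes the parent process spawn four worker processes.
    app.run(host="0.0.0.0", port=8000, workers=4)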
I also experience the memory leak issue when using more than one worker on Sanic 19.6.0 with python 3.7.3 |
@ladler0320 19.6.0 is not out yet, do you mean 19.3? |
@sjsadowski I'm not quite sure, but it's probably the 19.6.0. |
Got it, you installed from git master. Can you share your code and process to determine memory leaks? |
@sjsadowski I do apologize, but I'm not sure I'm allowed to share the code. However, I can confirm that memory leak occurs when I'm using more than one worker and only when the service receives requests. |
@ladler0320 if you can do pseudocode, that would help too. It's hard to know what to fix if we can't identify what and where it's broken. |
@sjsadowski Sure.

import argparse
import asyncio
import concurrent.futures
import json as js
import os

import async_timeout
from sanic import Sanic, exceptions, response
from sanic.log import logger

# LOGGING_CONFIG, APP_CONFIG and the executor target `function` come from
# elsewhere in the project.

def init_app():
    app_local = Sanic(name="App", log_config=LOGGING_CONFIG)
    app_local.config.update(APP_CONFIG)  # "RESPONSE_TIMEOUT": 0.5
    app_local.error_handler.add(exceptions.ServiceUnavailable, timeout_error_handler)
    app_local.add_route(handler, "/models/<model>", methods=["POST"])
    return app_local


async def handler(request, model):
    loop = asyncio.get_event_loop()
    run = loop.run_in_executor
    try:
        with async_timeout.timeout(0.5, loop=loop):
            data = request.json
            resp = await run(pool, function, model, data)
            status = 200
            resp = js.dumps(resp, ensure_ascii=False)
    except (concurrent.futures.CancelledError,
            concurrent.futures.TimeoutError,
            asyncio.TimeoutError):
        resp = ""
        status = 504
    except Exception as e:
        logger.info(e)
        resp = ""
        status = 500
    return response.text(resp, status=status)


def timeout_error_handler(request, exception):
    return response.text("", 504)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default=os.getenv("HOST", "0.0.0.0"))
    parser.add_argument("--port", type=int, default=os.getenv("PORT", 1234))
    args = parser.parse_args()
    host = args.host
    port: int = args.port
    workers = int(os.getenv("WORKERS"))
    threads = int(os.getenv("THREADS"))
    app = init_app()
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=threads)
    app.run(host=host, port=port, workers=workers, debug=False)
|
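Not part of the snippet above, but one way to narrow down where the growth happens with multiple workers is to log the resident set size of each worker on every response. This is a hypothetical addition that assumes the app and logger objects from the snippet:

import os
import resource

@app.middleware("response")
async def log_rss(request, response):
    # ru_maxrss is the peak resident set size of this worker process
    # (kilobytes on Linux); if it keeps climbing under a steady request
    # rate, this worker is the one growing.
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    logger.info("pid=%s ru_maxrss=%s kB", os.getpid(), rss)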
Facing a similar issue with 20.6.3. Any update on this? |
Update on what? This issue has been closed for 2 years. If you are experiencing something, please open a new issue and include some relevant details and source. |
Describe the bug
I see constantly increasing memory usage of my running Sanic instance.
After using tracemalloc to identify the cause of this, I found the top 5 memory allocations:
If I look at the /usr/local/lib/python3.5/dist-packages/sanic/server.py:547 entry, which has size=16.5 MiB, it leads to https://github.com/huge-success/sanic/blob/52deebaf65ab48d5fbfa92607f22d96ee1bdb7a7/sanic/server.py#L607
Memory consumption graph (gaps are OOM kills due to memory limits in Kubernetes)
Could there be an issue with this part of the code in terms of leaking memory?
Environment (please complete the following information):
sanic-18.12.0-py3
Additional context
Sorry, I'm not a Python expert; maybe I'm using tracemalloc the wrong way, or the issue is somewhere else. In that case I'm sorry and would be glad for any ideas where to look or how to profile it.
Thanks in advance
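For anyone who wants to reproduce this kind of report, here is a minimal sketch (illustrative route and app names, not taken from the original setup) that exposes tracemalloc's top allocations from a running Sanic app:

import tracemalloc

from sanic import Sanic, response

tracemalloc.start(25)  # keep up to 25 frames per allocation for useful tracebacks
app = Sanic(name="leak-probe")

@app.route("/debug/memory")
async def memory_report(request):
    # Compare successive calls to this endpoint while traffic is flowing;
    # allocations that keep growing between snapshots are leak candidates.
    snapshot = tracemalloc.take_snapshot()
    top = snapshot.statistics("lineno")[:5]
    return response.json([str(stat) for stat in top])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)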