Allow client-side timeouts on Windows #632
Comments
Proposal

Turns out that a Python queue has such a primitive: https://docs.python.org/3/library/queue.html#queue.Queue.get. We can use it as follows:
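The snippet from the original comment is not preserved in this page, so here is a minimal sketch of the idea, assuming we hand the blocking `os.read()` off to a helper thread and let `Queue.get(timeout=...)` provide the client-side timeout (the helper name `read_with_timeout` and the pipe setup are mine, not from the issue):

```python
import os
import queue
import threading


def read_with_timeout(fd, size, timeout):
    """Blocking read with a timeout, via a helper thread and a queue.

    A daemon thread performs the blocking os.read() and hands the result
    to the caller through a queue; Queue.get(timeout=...) supplies the
    timeout, and it works the same way on Windows.
    """
    q = queue.Queue()
    t = threading.Thread(target=lambda: q.put(os.read(fd, size)), daemon=True)
    t.start()
    try:
        return q.get(block=True, timeout=timeout)
    except queue.Empty:
        raise TimeoutError(f"read timed out after {timeout}s")


# Usage: read up to 1024 bytes from a pipe, waiting at most 2 seconds.
r, w = os.pipe()
os.write(w, b"hello")
print(read_with_timeout(r, 1024, timeout=2))  # prints b'hello'
```

Note that if the read times out, the helper thread stays blocked on `os.read()`; marking it as a daemon thread keeps it from preventing interpreter exit.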
Performance

Using a queue and a Python thread certainly adds non-negligible overhead. On the other hand, the Docker container is doing work between reads, so this overhead may not be noticeable to the user. What we need to answer here is: when does this overhead start becoming noticeable, and will it realistically bite us? I've written a synthetic benchmark where we simulate reading from a pipe with a configurable per-read delay.

Benchmark script:

```python
#!/bin/python
import queue
import sys
import threading
import time

TEST_DURATION = 1


def main():
    if len(sys.argv) != 2:
        print("Usage: ./bench.py <delay_ms>")
        return 1

    delay = float(sys.argv[1]) / 1000
    iterations = int(TEST_DURATION / delay)
    print(f"Reading with {delay}s delay, iterations: {iterations}")

    # Test 1: Common read
    print("Normal read")
    start = time.monotonic_ns()
    for i in range(iterations):
        time.sleep(delay)
    end = time.monotonic_ns()

    total = normal_total = end - start
    per_iter = total / iterations
    print(f"Total: {total} ns, per iteration: {per_iter} ns")

    # Test 2: Read with queue
    print("Enqueued read")
    q = queue.Queue()

    def enqueued_read():
        for i in range(iterations):
            time.sleep(delay)
            q.put(9)

    t = threading.Thread(target=enqueued_read)
    start = time.monotonic_ns()
    t.start()
    for i in range(iterations):
        q.get(block=True, timeout=2)
    end = time.monotonic_ns()

    total = end - start
    per_iter = total / iterations
    print(f"Total: {total} ns, per iteration: {per_iter} ns")

    slowdown = round((1 - normal_total / total) * 100, 2)
    print(f"Slowdown: {slowdown}%")


if __name__ == "__main__":
    sys.exit(main())
```

The results are the following:
What we see in this table (ignoring the weird result for the 0.1 ms delay) is that the queue overhead starts to become noticeable (> 1% slowdown) once the work that the container has to do per page is < 5 ms (at least on my machine). Empirically,
Removing this because timeouts were fully removed.
In order to add streaming support in Windows for the first stage of the conversion (see #443, #627), we need to read the stdout of the container from the host. This is currently implemented in Qubes with this method, which uses non-blocking reads under the hood:
dangerzone/dangerzone/isolation_provider/qubes.py, lines 40 to 45 in 6876fa5
The reason we use non-blocking reads is to stop the execution if the server component does not send all the necessary bytes within the timeout period. The problem here is that our non-blocking read implementation does not pass tests on Windows platforms:
dangerzone/tests/test_util.py, lines 40 to 43 in 6876fa5
because non-blocking reads from pipes do not work on Windows (as noted in the `select` module documentation). So, we need to somehow support client-side timeouts on Windows without relying on non-blocking reads. Basically, we need a primitive that offers blocking operations with a timeout and that works on Windows as well.
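For reference, `queue.Queue.get` (which the proposal above settles on) offers exactly this kind of primitive: a blocking wait that gives up after a timeout, with identical behavior across platforms. A quick demonstration of its semantics:

```python
import queue
import time

q = queue.Queue()

# On an empty queue, get() blocks for up to `timeout` seconds, then
# raises queue.Empty instead of hanging forever.
start = time.monotonic()
try:
    q.get(block=True, timeout=0.5)
except queue.Empty:
    elapsed = time.monotonic() - start
    print(f"timed out after ~{elapsed:.1f}s")

# On a non-empty queue, get() returns immediately with the item.
q.put(b"data")
print(q.get(block=True, timeout=0.5))  # prints b'data'
```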