Spurious benchmark failure in unit tests #1233
Comments
The fact that they are both flat zero for time is super suspicious...
Indeed, that is incredibly weird. I just ran the test a bunch of times on my machine and consistently get non-zero times. E.g.

```python
def test_re_initialization(self):
    start = time()
    x = 5 + 1
    # ft = FileTable(self.loc1)
    first_initialization = time() - start

    start = time()
    y = 10 / 3
    # ft_reinitialized = FileTable(self.loc1)
    second_initialization = time() - start

    print([f"{t:.2e}" for t in [
        # master_init, m2,
        first_initialization, second_initialization
    ]])
```
Ok, interesting. Actually, if I re-run it with the simple math operations it sometimes comes out as e-6 to e-8, but sometimes it comes out as completely zero. So under some conditions the FileTable instantiation is getting completely trivial, even on the first go?
This snippet yields False:

```python
times = []
for n in range(500):
    start = time()
    FileTable(self.loc1)
    dt = time() - start
    times.append(dt)
print(any(t < 1e-8 for t in times))
```

What's wild is that even if I go over to

EDIT: snippet indentation

EDIT: changing 500 -> 50000 still doesn't give any zero times, so it's not just sampling
Sorry @pmrv, I'm at a loss. Your linked failure was on Windows -- are you always seeing it on the same OS?
Thanks for testing it out. I hadn't paid attention yet to which OS this appears on, but I can keep an eye on it. Actually it's a good hint, maybe the default timer used by `time()` just doesn't have enough resolution on Windows.
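For reference, a quick way to check that hypothesis is to compare the advertised resolution of the two clocks; a minimal sketch (exact numbers will differ per machine and OS):

```python
import time

# Compare the advertised resolution of the wall-clock timer with the
# high-resolution performance counter. Where the wall clock is coarse
# (often the case on Windows), sub-resolution intervals measure as 0.0.
for name in ("time", "perf_counter"):
    info = time.get_clock_info(name)
    print(f"{name}: implementation={info.implementation}, "
          f"resolution={info.resolution:.1e} s")
```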
I guess I wouldn't be particularly surprised...
Cool, didn't know about that! I can patch the test later today and hopefully that resolves the issue.
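A minimal sketch of what such a patch could look like, assuming the fix is simply to swap `time()` for `perf_counter()`; `FileTable` and `self.loc1` are taken from the snippets above, and the final assertion is only an illustration, not the actual check from the test suite:

```python
from time import perf_counter

def test_re_initialization(self):
    # perf_counter() resolves well below a microsecond on every platform,
    # whereas time() can be coarse enough on Windows that short intervals
    # come out as exactly zero.
    start = perf_counter()
    ft = FileTable(self.loc1)
    first_initialization = perf_counter() - start

    start = perf_counter()
    ft_reinitialized = FileTable(self.loc1)
    second_initialization = perf_counter() - start

    # Illustrative check only; the real test may assert something different.
    self.assertLess(second_initialization, first_initialization)
```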
There's a test from the `FileTable` unit tests that sometimes fails and sometimes not, with the following error:
I tend to just restart them until it works and it doesn't come up often, but can that be made more robust? Seems like the actual test is just a bit too fast and the timing becomes degenerate.
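One generic way to make such timing checks sturdier, sketched here under the assumption that the test compares two construction times (the `best_of` helper and the tolerance are illustrative, not from the actual test suite), is to repeat each measurement and compare the best of N:

```python
from time import perf_counter

def best_of(func, repeats=5):
    """Return the fastest of several timings; taking the minimum filters
    out scheduler noise and keeps a single degenerate (zero-length)
    measurement from deciding the test outcome."""
    best = float("inf")
    for _ in range(repeats):
        start = perf_counter()
        func()
        best = min(best, perf_counter() - start)
    return best

# Hypothetical usage inside the test (FileTable and self.loc1 would come
# from the existing fixture):
# first = best_of(lambda: FileTable(self.loc1))
# second = best_of(lambda: FileTable(self.loc1))
# self.assertLess(second, first * 1.5)  # illustrative tolerance
```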