Add Panel and HoloViews benchmark #74
Conversation
Nice!
Force-pushed from d8a1ed5 to 89ad5f5
The fix here turned out to be very simple. There are three attributes controlling how many individual benchmarks are run by asv; a sketch of these follows below. On my dev machine (M1 Mac without dedicated graphics) the benchmarks now run.

This was using ... @droumis It would be good if you could see if you can run this locally now.

Also, this was using ... (in the ...).
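As context for the attributes mentioned above, here is a minimal sketch of how an asv timing benchmark controls its own run counts. The attribute names (rounds, repeat, number) are standard asv benchmark attributes; that these are the three meant here is my assumption, and the class and values are illustrative only.

```python
class PanelHoloviewsExample:
    # asv timing-benchmark attributes that determine how many times each
    # benchmark is actually executed: rounds x repeat samples are collected,
    # and each sample times `number` calls of the benchmark function.
    rounds = 1   # rounds of samples to collect
    repeat = 1   # samples per round
    number = 1   # benchmark-function calls per sample
    timeout = 120  # seconds before asv gives up on a run (illustrative value)

    def time_latency(self):
        # Body of the timed benchmark goes here.
        ...
```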
Great work!

❯ asv run -e
· Creating environments
· Discovering benchmarks
· Running 3 total benchmarks (1 commits * 1 environments * 3 benchmarks)
[ 0.00%] · For hvneuro commit 519ee3c2 <main>:
[ 0.00%] ·· Benchmarking virtualenv-py3.11-playwright
[ 20.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--)..
[ 60.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--)..
[ 73.33%] ··· Running (bokeh_example.BokehExampleZoom.time_zoom--).
[ 80.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--).
[ 86.67%] ··· bokeh_example.BokehExampleLatency.time_latency ok
[ 86.67%] ··· ========= ============ ============
-- output_backend
--------- -------------------------
n canvas webgl
========= ============ ============
1000 60.6±0.9ms 71.8±7ms
10000 80.7±5ms 95.2±5ms
100000 217±2ms 222±4ms
1000000 1.84±0.01s 1.80±0.01s
========= ============ ============
[ 93.33%] ··· bokeh_example.BokehExampleZoom.time_zoom ok
[ 93.33%] ··· ========= =========== ==========
-- output_backend
--------- ----------------------
n canvas webgl
========= =========== ==========
1000 51.9±3ms 42.7±3ms
10000 41.5±10ms 43.6±7ms
100000 75.6±2ms 71.9±3ms
1000000 297±4ms 242±4ms
========= =========== ==========
[100.00%] ··· panel_holoviews_example.PanelHoloviewsExample.time_latency ok
[100.00%] ··· ========= ============ ============
-- output_backend
--------- -------------------------
n canvas webgl
========= ============ ============
1000 108±10ms 112±10ms
10000 122±4ms 133±6ms
100000 235±6ms 253±5ms
1000000 1.71±0.03s 1.68±0.01s
========= ============ ============
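For readers unfamiliar with the parameterised output above, here is a minimal sketch of how asv expresses the n / output_backend parameter grid shown in the tables. The class below is illustrative, not the repository's actual benchmark code.

```python
class BokehExampleLatency:
    # Parameter grid matching the rows and columns in the tables above:
    # each combination of n and output_backend is benchmarked separately.
    params = ([1000, 10_000, 100_000, 1_000_000], ["canvas", "webgl"])
    param_names = ("n", "output_backend")

    def time_latency(self, n, output_backend):
        # Measure the round-trip latency of the plot for this combination.
        ...
```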
Here are some thoughts on debugging benchmarks. So far, when things go wrong it is usually due to communications or timeout issues, and everything just freezes, making it difficult to debug. What I do is limit the set of benchmarks being run, change

self._browser = playwright.chromium.launch(headless=True)

into

self._browser = playwright.chromium.launch(headless=False)

and then run the benchmark in quick mode, e.g. something like asv run -e -b <benchmark> -q.
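For reference, a minimal standalone sketch of the headless toggle described above; the URL and page interaction are illustrative assumptions, not the repository's actual code.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as playwright:
    # headless=False opens a visible browser window, so when a benchmark
    # freezes you can watch what the page is actually doing.
    browser = playwright.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("http://localhost:5006")  # illustrative URL for the served app
    # ... drive the app the same way the benchmark does ...
    browser.close()
```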
This builds on top of #73 and will need to be rebased against main after that PR is merged.

It adds a benchmark of a Panel and HoloViews example that was supplied by @droumis and @philippjfr. It runs fine provided each benchmark is only run once (using the -q flag), i.e. asv run -e -b Panel -q. Running each benchmark multiple times in the normal manner (asv run -e -b Panel) gives an error which I think implies that the Bokeh/Panel/Tornado servers are not restarting as I intended between multiple repeats of the same benchmark; the intended per-repeat lifecycle is sketched below.
I will continue with this next week.
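To make the intended lifecycle concrete, here is a minimal sketch of an asv benchmark that starts and stops a fresh Panel server in setup/teardown. The app factory, port and attribute values are illustrative assumptions, not the repository's actual code, and it assumes pn.serve(..., threaded=True) returns a thread with a stop() method.

```python
import panel as pn

def make_app():
    # Illustrative stand-in for the Panel/HoloViews example being benchmarked.
    return pn.pane.Markdown("hello")

class PanelHoloviewsExample:
    def setup(self):
        # Start a fresh server for each repeat so no state leaks between runs.
        # threaded=True returns a stoppable thread wrapping the Bokeh server.
        self._server = pn.serve(make_app, port=5006, threaded=True, show=False)

    def teardown(self):
        # Shut the server down so the next repeat starts from scratch.
        self._server.stop()

    def time_latency(self):
        # Drive the served page (e.g. with Playwright) and measure the
        # round-trip latency here.
        ...
```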