
Cache last input and accessed index for faster lookups #34

Closed · chrispcampbell opened this issue Sep 10, 2020 · 1 comment · Fixed by #43 or #190
@chrispcampbell (Contributor)

The LOOKUP function operates on (x,y) pairs where the x value is assumed to be monotonically increasing. The way lookups are currently used, the input value tends to be a "marching time" value, meaning that it increases from initial time to final time, and then back to initial time and so on.

Two observations:

  1. In the case of the En-ROADS model, the time step is 0.125 years, but most lookup data has a granularity of one year. This means that much of the time, each lookup is called 8 times with the same input and same expected return value, before moving up to the next year.

  2. Currently the LOOKUP function uses a simple linear search that always begins at zero.

We can improve the speed of lookups and address both of the above by caching the last input and the last-accessed index in the Lookup itself. If the current input value is greater than or equal to the last input value, we can start the search at the cached index instead of at zero. With this approach we would generally never have to advance more than one index per call, and only when the time variable returns to the initial time would we need to reset the start index to zero.
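
To make the idea concrete, here is a minimal sketch in C of how such a cached lookup could work. The struct fields, function names, and interpolation details are illustrative assumptions and are not taken from the actual SDEverywhere `__lookup` implementation.

```c
#include <stdio.h>

/* Hypothetical Lookup with a cached input/index; names are illustrative only. */
typedef struct {
  const double *xs;   /* monotonically increasing x values */
  const double *ys;   /* corresponding y values */
  int n;              /* number of (x, y) pairs */
  double lastInput;   /* input from the previous call */
  int lastIndex;      /* segment index found by the previous call */
} Lookup;

double lookup(Lookup *lk, double x) {
  /* Clamp to the ends of the table. */
  if (x <= lk->xs[0]) return lk->ys[0];
  if (x >= lk->xs[lk->n - 1]) return lk->ys[lk->n - 1];

  /* Resume the linear search at the cached index if the input has not
     decreased since the last call; otherwise fall back to index 0
     (e.g. when the "marching time" input wraps back to initial time). */
  int i = (x >= lk->lastInput) ? lk->lastIndex : 0;

  /* Advance to the segment that brackets x; with a time-like input
     this usually takes at most one step. */
  while (i < lk->n - 2 && x > lk->xs[i + 1]) {
    i++;
  }

  lk->lastInput = x;
  lk->lastIndex = i;

  /* Linear interpolation within the bracketing segment. */
  double t = (x - lk->xs[i]) / (lk->xs[i + 1] - lk->xs[i]);
  return lk->ys[i] + t * (lk->ys[i + 1] - lk->ys[i]);
}

int main(void) {
  const double xs[] = {2020, 2021, 2022, 2023};
  const double ys[] = {1.0, 2.0, 4.0, 8.0};
  Lookup lk = {xs, ys, 4, xs[0], 0};

  /* Simulate a marching-time input with a 0.125-year time step:
     the cached index only advances once per year of model time. */
  for (double t = 2020.0; t <= 2023.0; t += 0.125) {
    printf("%.3f -> %.3f (index %d)\n", t, lookup(&lk, t), lk.lastIndex);
  }
  return 0;
}
```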

@chrispcampbell chrispcampbell self-assigned this Sep 10, 2020
@chrispcampbell chrispcampbell added this to the 0.6.0 milestone Sep 10, 2020
@chrispcampbell (Contributor, Author)

Here are the numbers for this change relative to the earlier performance work, using the En-ROADS model as a real-world test case.

Performance

MacBook Pro (2019) | 2.4 GHz 8-core i9, 32 GB RAM, macOS 10.15

Safari:

| Issue | C run (ms) | Wasm run (ms) | Wasm init (ms) | JS mem (MB) | Page mem (MB) |
| --- | --- | --- | --- | --- | --- |
| baseline | 45.8 | 87.5 | 38.0 | 94 | 685 |
| SDE 18 | 46.0 | 85.6 | 18.0 | 39 | 672 |
| SDE 19 | 42.8 | 49.4 | 15.0 | 38 | 25 |
| SDE 22 | 34.8 | 44.8 | 15.0 | 38 | 21 |
| SDE 23 | 32.7 | 42.8 | 13.0 | 38 | 32 |
| SDE 24 | 26.6 | 38.2 | 13.0 | 39 | 26 |
| SDE 7 * | 24.4 | 34.8 | 10.0 | 39 | 20 |
| En-ROADS 640 | n/a | 47.0 | 12.0 | 37 | 29 |
| SDE 34 | 20.3 | 45.5 | 12.0 | x | x |

Chrome:

| Issue | Wasm run (ms) | Wasm init (ms) |
| --- | --- | --- |
| SDE 24 | 41.6 | 14.1 |
| En-ROADS 640 | 59.1 | 9.1 |
| SDE 34 | 52.7 | 10.8 |

Firefox:

| Issue | Wasm run (ms) | Wasm init (ms) |
| --- | --- | --- |
| SDE 24 | 41.3 | 19.0 |
| En-ROADS 640 | 63.7 | 14.0 |
| SDE 34 | 56.9 | 14.0 |

iPhone 8 | A11, iOS 13

| Issue | C run (ms) | Wasm run (ms) | Wasm init (ms) | JS mem (MB) | Page mem (MB) |
| --- | --- | --- | --- | --- | --- |
| baseline | 39.9 | 187.0 | 165.0 | 39 | 645 |
| SDE 18 | 40.3 | 219.0 | 86.0 | 38 | 724 |
| SDE 19 | 40.1 | 81.6 | 83.0 | 38 | 41 |
| SDE 22 | 35.5 | 74.6 | 86.0 | 40 | 40 |
| SDE 23 | 31.1 | 73.6 | 82.0 | 41 | 39 |
| SDE 24 | 28.5 | 71.6 | 82.0 | 40 | 38 |
| SDE 7 * | 28.7 | 70.0 | 70.0 | 36 | 36 |
| En-ROADS 640 | n/a | 57.1 | 48.0 | 38 | 39 |
| SDE 34 | 22.3 | 50.6 | 60.0 | x | x |

iPad Air (2013) | A7, iOS 12

| Issue | C run (ms) | Wasm run (ms) | Wasm init (ms) | JS mem (MB) | Page mem (MB) |
| --- | --- | --- | --- | --- | --- |
| baseline | 151.0 | 1372.2 | 30146.0 | 77 | 331 |
| SDE 18 | 166.0 | 1408.0 | 4416.0 | 42 | 395 |
| SDE 19 | 151.0 | 837.6 | 1291.0 | 45 | 41 |
| SDE 22 | 137.0 | 771.6 | 1484.0 | 44 | 40 |
| SDE 23 | 110.1 | 642.2 | 1148.0 | 44 | 41 |
| SDE 24 | 111.8 | 638.4 | 1236.0 | 45 | 37 |
| SDE 7 * | 91.7 | 543.8 | 1120.0 | 43 | 51 |
| En-ROADS 640 | n/a | 210.7 | 570.0 | 41 | 40 |
| SDE 34 | 89.9 | 200.7 | 606.0 | x | x |

Size

| Issue | Wasm size (bytes) |
| --- | --- |
| baseline | 1,084,036 |
| SDE 18 | 773,968 |
| SDE 19 | 776,851 |
| SDE 22 | 737,028 |
| SDE 23 | 741,668 |
| SDE 24 | 741,677 |
| SDE 7 * | 616,264 |
| En-ROADS 640 | 466,781 |
| SDE 34 ** | 501,731 |

Legend

| Issue | Date | en-roads-app | SDEverywhere | Notes |
| --- | --- | --- | --- | --- |
| baseline | 2020/07/08 | 91d0918 | 3002ac9 | baseline prior to performance work |
| SDE 18 | 2020/07/09 | tbd | tbd | change lookup init to use static arrays |
| SDE 19 | 2020/07/09 | tbd | tbd | break large functions into chunked subfunctions |
| SDE 22 | 2020/07/10 | tbd | tbd | replace wrapper functions with macros |
| SDE 23 | 2020/07/10 | tbd | tbd | replace dimension array access with simple index |
| SDE 24 | 2020/07/10 | tbd | tbd | optimize __lookup function |
| SDE 7 | 2020/07/10 | tbd | tbd | change from doubles to floats (* experimental, not merged) |
| En-ROADS 640 | 2020/07/20 | tbd | tbd | use -Os instead of -O3 |
| SDE 34 | 2020/09/14 | tbd | tbd | cache input/index in __lookup function (** model upgrade with AQ work accounts for slight increase in size compared to previous numbers) |
