Project Meeting 2024.05.02
- Update on Phase 9a Activities
- Jeff to continue working on addressing sharrow issues for other model components:
- non-mandatory tour scheduling
- joint tour scheduling
- CDAP
- joint tour frequency composition
- vehicle allocation
- school escorting
- David to implement an alternative fix for the sharrow issue in the trip destination model (in place of Jeff’s current fix): move a merge operation on the land use table into a pre-processing step
- Navjyoth to rerun the SANDAG model test with and without blosc-compressed skims to check whether the trend in load times and memory usage while loading skims into sharrow shared memory persists
- David to continue working on addressing slow chunking times in the SANDAG model
- Collectively, need to investigate why skim load times are higher with sharrow than before, including whether any previous model component fixes contributed to this
- Jeff identified the problem in the trip destination model (when run with sharrow)
- A merge operation on the land use table inside the trip destination model hurts performance when using sharrow; this operation should happen in a pre-processor step instead. David to implement this change (instead of Jeff’s current fix); see the sketch below.
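A minimal sketch of the idea, assuming hypothetical table and column names rather than the actual ActivitySim pre-processor API: the zone-level data gets merged onto the land use table once during pre-processing, so the trip destination expressions only reference plain columns at evaluation time.

```python
import pandas as pd

def annotate_landuse(land_use: pd.DataFrame, extra_zone_data: pd.DataFrame) -> pd.DataFrame:
    """Merge zone-level attributes onto the land use table once, up front.

    Doing this in a pre-processing step means the sharrow-compiled trip
    destination expressions can reference plain land use columns instead of
    triggering a pandas merge during component evaluation.
    """
    # left join on the zone index keeps every zone; names are illustrative only
    return land_use.merge(extra_zone_data, left_index=True, right_index=True, how="left")

# usage sketch: run once before the trip destination component
# land_use = annotate_landuse(land_use, extra_zone_data)
```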
- It was unanimously decided after discussion that full-scale testing of the SANDAG model can wait until after the other sharrow changes are implemented.
- Sharrow issues to be addressed: sharrow is currently turned off for non-mandatory tour scheduling, joint tour scheduling, CDAP, joint tour frequency composition, vehicle allocation, vehicle type (already fixed), and school escorting
- Writing skims to shared memory: blosc-compressed skims load faster than zlib-compressed skims, based on runs by David, Sijia, and Navjyoth (a rough compression sketch follows this list)
- Jeff changed skim loading to a single thread (sequential). Navjyoth tested a SANDAG ABM model run and, surprisingly, this made loading faster than multi-threaded loading
- Some unexplained trends in memory use and load times for loading skims to shared memory.
- Until about 60 GB is loaded, the time taken per skim is very low, but beyond that, there is an increase in the load time, including some very noticeable spikes.
- Jeff created an issue in GitHub to address this
- Navjyoth to run the same test again to check if the same pattern of memory usage and load time persists
- Need to investigate why skim load times are much higher with sharrow than they were before
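For reference, a rough illustration of the compression comparison (not the project’s actual skim-conversion code); it assumes a zarr v2 style API with numcodecs compressors, and the array and file names are made up.

```python
import numpy as np
import zarr
from numcodecs import Blosc, Zlib

# stand-in for a single skim matrix; real skims come from the model's OMX files
skim = np.random.default_rng(0).random((5000, 5000), dtype="float32")

# blosc-compressed store (the variant that loaded faster in the team's tests)
zarr.save_array("skim_blosc.zarr", skim,
                compressor=Blosc(cname="zstd", clevel=5, shuffle=Blosc.SHUFFLE))

# zlib-compressed store for comparison
zarr.save_array("skim_zlib.zarr", skim, compressor=Zlib(level=5))

# timing zarr.load() on each store back into (shared) memory would reproduce
# the load-time comparison described above
```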
- David worked on addressing slow chunk times in the SANDAG model.
- One solution is to use explicit chunking
- The chunk training model may also need to be revisited, but this could be addressed later, after the other issues are fixed
- David will work on explicit chunking, also contingent on budget (a rough sketch follows this list)
- Everyone agreed that addressing the skim load time and fixing other sharrow-related issues are higher priority than addressing the chunk times
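A minimal sketch of what explicit chunking means here, under the assumption that a component’s chooser table is simply split into fixed-size slices instead of relying on trained/adaptive chunk sizing; the function and parameter names are hypothetical, not ActivitySim’s API.

```python
import pandas as pd

def iter_explicit_chunks(choosers: pd.DataFrame, rows_per_chunk: int):
    """Yield fixed-size slices of a chooser table.

    Explicit chunking trades the overhead (and occasional slowness) of
    dynamically trained chunk sizing for predictable, user-specified chunk
    boundaries.
    """
    for start in range(0, len(choosers), rows_per_chunk):
        yield choosers.iloc[start:start + rows_per_chunk]

# usage sketch: evaluate the component once per fixed-size chunk
# for chunk in iter_explicit_chunks(choosers, rows_per_chunk=100_000):
#     results.append(evaluate_component(chunk))
```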
- Discussion on budget: David has sufficient capacity left; Sijia and CS have comparatively less. Joe is happy with the current momentum. Continue with the optimization task, possibly using the on-call support funds as a bridge until more funds can be allocated in the next couple of months.