Project Meeting 2024.08.08
Michelle Bina edited this page Aug 13, 2024
- AMPO Updates
- RSG/Driftless Labs presentation on proposed Estimation Mode enhancements
- AMPO has decided to sunset their administrative role in ActivitySim, effective in 11 months. AMPO is proud of what has been accomplished and thinks the Consortium is well positioned to continue to maintain and grow the platform and the consortium.
- Internally, they have started to gather all the documentation and history to hand over to the next administrator.
- In the Consortium partners-only meeting next Thursday (8/15), this will be the only topic on the agenda. Each member is encouraged to bring ideas for new administration.
- The new release (1.3) is very close, expected within the next week. Members are invited to test out the new code and create issues if anything breaks.
See presentation: Phase9B_estimation_enhancements_pt1.pptx
- Current process
- An initial set of steps includes processing the raw survey data, formatting and cleaning the data to align with ActivitySim, and an infer.py step that matches survey records to alternative numbers so the data can be merged with the ActivitySim levers. Then you run ActivitySim and are likely to get an error because of some quirk in the data that doesn't completely align with ActivitySim; this iterates until you no longer encounter errors.
- Once you have your estimation data bundles created, you can use the post-processing Larch estimation functionality (demos set up in Jupyter notebooks) to iteratively work with ActivitySim to have new variables and new terms to try in the specification for whichever model that you’re working on.
- For example, work from home is very specific to the survey data and the model. There are many ways to ask questions about it and to account for it in models. There isn’t much to do to streamline the initial processing of the data, but improvements can be made during the iteration processes between ActivitySim estimation mode and Larch.
- In estimation mode, you feed it an initial specification that you want to test; it writes out the model spec, runs the model as normal, and then overwrites the modeled choice with the survey choice. The output of this process is an estimation data bundle, including the model settings from your configs folder, the coefficients and specs, and the chooser data and evaluated expressions. Those data get loaded into Larch, which outputs summaries for reviewing the output statistics and newly estimated coefficients to plug back into the model.
- However, a lot of the time we want to make changes to the specification, such as removing or adding terms. The process then needs to be iterated again to make the new calculations with the new spec and get the utilities into the estimation data bundle. The proposed improvements will make it easier to pick new terms and try them out without having to go back to ActivitySim constantly to recalculate values.
- Improvements
- Run time improvements
- Estimation mode currently runs single-process. If different processes try to write to the same file, the run can crash with permission errors. To get around this, each subset of the estimation data bundle will be written to its own folder and output file, and all of them will be coalesced together at the end so they can be read in as one complete file.
- Estimation data bundle writing is slow, and moving it to multiprocessing would be an improvement. To increase speed and decrease disk space, files can be written in non-CSV formats like parquet; we did this for ActivitySim pipeline files and will do the same for estimation data bundles.
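The write-per-process-then-coalesce pattern described above can be sketched as below. This is a minimal illustration, not ActivitySim's actual API: the function names, folder layout (`part_<n>` subfolders), and `choosers.csv` file name are all hypothetical. The same pattern would apply with `DataFrame.to_parquet`/`read_parquet` for the proposed parquet outputs.

```python
import os
import tempfile

import pandas as pd

def write_part(edb_dir, process_id, df):
    """Each subprocess writes its slice of the estimation data bundle to
    its own subfolder, so no two processes ever touch the same file
    (avoiding the permission-error crashes seen with a shared file)."""
    part_dir = os.path.join(edb_dir, f"part_{process_id}")
    os.makedirs(part_dir, exist_ok=True)
    df.to_csv(os.path.join(part_dir, "choosers.csv"), index=False)

def coalesce(edb_dir):
    """Single final pass: stitch every per-process part back into one
    complete file that downstream tools can read as a whole."""
    part_files = sorted(
        os.path.join(edb_dir, d, "choosers.csv")
        for d in os.listdir(edb_dir)
        if d.startswith("part_")
    )
    full = pd.concat((pd.read_csv(f) for f in part_files), ignore_index=True)
    full.to_csv(os.path.join(edb_dir, "choosers.csv"), index=False)
    return full

# Two "subprocesses" each write their own slice, then one coalesce step.
edb_dir = tempfile.mkdtemp()
write_part(edb_dir, 0, pd.DataFrame({"person_id": [1, 2], "choice": [3, 1]}))
write_part(edb_dir, 1, pd.DataFrame({"person_id": [3, 4], "choice": [2, 2]}))
full = coalesce(edb_dir)
```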
- Estimation files are very large because, in destination choice models for example (and especially with MAZs), there is data for every single alternative. Writing them to disk is very slow, and they are slow to work with in Larch. Instead of writing out all destinations, they can be sampled before being written to the estimation data bundle. The BayDAG PR included some of this, but we need to make sure it works universally and update the documentation to show how it works with Larch.
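The sampling idea above amounts to keeping, for each chooser, the chosen destination plus a random subset of the unchosen ones. The sketch below assumes a long-format table with hypothetical `chooser_id`/`zone_id`/`chosen` columns; it is illustrative only, and it omits the sampling correction term that consistent estimation with sampled alternatives normally requires.

```python
import numpy as np
import pandas as pd

def sample_alternatives(long_df, n_keep, seed=0):
    """For each chooser, keep the chosen destination plus a random subset
    of the unchosen ones, shrinking the table written to the bundle."""
    rng = np.random.default_rng(seed)
    kept = []
    for _, grp in long_df.groupby("chooser_id"):
        chosen = grp[grp["chosen"] == 1]
        unchosen = grp[grp["chosen"] == 0]
        take = min(n_keep, len(unchosen))
        idx = rng.choice(unchosen.index.to_numpy(), size=take, replace=False)
        kept.append(pd.concat([chosen, unchosen.loc[idx]]))
    return pd.concat(kept, ignore_index=True)

# Demo: 3 choosers x 10 destination zones, one chosen zone each.
rows = []
for chooser in range(3):
    for zone in range(10):
        rows.append({"chooser_id": chooser, "zone_id": zone,
                     "chosen": int(zone == chooser * 3)})
long_df = pd.DataFrame(rows)
sampled = sample_alternatives(long_df, n_keep=3)
```

With thousands of MAZ alternatives per chooser, dropping from the full alternative set to a fixed sample is where the bulk of the file-size reduction would come from.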
- All improvements would be invisible to the users, with everything handled under the hood.
- Will there be benchmarking statements on this?
- Will do that for the SF dataset that they have available, but not for other surveys.
- For SANDAG, it was about an 8-hour run time (but not sure that was all the models) and 40GB estimation data bundle.
- Usability Improvements
- Removing ActivitySim Loop
- Want to remove the iterative loop of running ActivitySim in order to seamlessly modify the specs and update coefficients.
- The experience would be to keep the specs open, modify them, and then hit run on estimation in Larch.
- When adding a new variable, the expressions would be calculated directly so that (assuming the data already exists in the estimation data bundle) the utility is calculated on the fly.
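Evaluating utilities on the fly, as described above, can be sketched as summing coefficient-weighted spec expressions over the chooser data already in the bundle. This is a simplified, hypothetical helper, not the actual ActivitySim or Larch API; the spec's expression-plus-coefficient shape mirrors ActivitySim's general spec-file layout.

```python
import pandas as pd

def utilities_from_spec(choosers, spec):
    """Evaluate each spec expression against the chooser data and sum the
    coefficient-weighted terms; trying a new term only needs its expression
    evaluated here, not a full ActivitySim re-run."""
    total = pd.Series(0.0, index=choosers.index)
    for _, row in spec.iterrows():
        total = total + row["coefficient"] * choosers.eval(row["expression"])
    return total

# Tiny illustrative chooser table and spec (column names are made up).
choosers = pd.DataFrame({"income": [30000, 90000], "auto_ownership": [0, 2]})
spec = pd.DataFrame({
    "expression": ["income / 10000", "auto_ownership == 0"],
    "coefficient": [0.1, -0.5],
})
util = utilities_from_spec(choosers, spec)  # 0.1*3 - 0.5 = -0.2; 0.1*9 = 0.9
```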
- Larch Reporting
- Estimation can fail for several reasons and Larch reports them, but we want to do some high level error reporting to help the user catch what’s going on.
- Add Predict Functionality
- To get comparisons of how well the model predictions match the survey data, you currently have to go to ActivitySim. This effort will add that functionality to Larch.
- Questions
- Is the performance testing going to be one- and two-zone models, or just two?
- Performance enhancements are expected to have greater benefits for the two-zone models because the file sizes (and computations) are much bigger.
- There is no difference in the code for one- or two-zone systems; it won't differentiate.
- One of the benefits of using only one set of survey data (the SF one) is that it's not pure survey data (it's small, synthesized/fake survey data) and can be made publicly available.
- Want to make sure there is consensus on the dataset used for testing, so partners should continue to think about this.