🚀 [feat] vehicle type model #486
Conversation
@mxndrwgrdnr - can you adjust this PR to merge to develop? Thanks.
Generally I found the code easy to read and it flowed well. There is a general lack of documentation, including:
- docstrings appropriately formatted for autodoc-ing
- type hints
- the origin of model coefficients, what estimation they are from, etc.
- general documentation about capabilities
- intra-documentation links to general ActivitySim "things" (i.e. the various formats, etc.)

There is also a lack of tests for the specific unit of the vehicle type choice.

I also found it odd when running the MTC test example locally that it didn't pick up the config. I suspect I'm doing something wrong, but if I'm doing it wrong, others might too (maybe?).

BTW, I'll probably have more questions once I get the MTC example working on my local machine - e.g. is it annotating tours, etc.
Car_8,0.9541,0.0096,0.0355,0.0007,0.0000
Car_9,0.9548,0.0037,0.0409,0.0004,0.0001
Car_10,0.9530,0.0015,0.0451,0.0003,0.0001
Car_11,0.9676,0.0096,0.0225,0.0003,0.0000
It seems as though some amount of "smoothing" would make sense here between the years. Many of the numbers for the lower-probability options go back and forth between 0 and >0 several times.
Yes, these probabilities are "lumpy". This is because they came directly from the National Household Travel Survey data. A comment about the lumpiness was added to the models.rst vehicle type choice documentation, encouraging users to adapt the probabilities to their region and smooth them as they see fit. Additional links in the documentation point to the presented results and sensitivity studies.

So, I suggest we leave the probabilities as-is in the context of this pull request, for the purposes of simplicity and transparency.
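For users who do want to smooth the year-to-year probabilities, a minimal pandas sketch is below. The column names are hypothetical (the actual header of the fuel type probabilities file is not shown here), and the smoothing window is illustrative; the key point is renormalizing so each row still sums to 1.

```python
import pandas as pd

# hypothetical fuel-type probabilities by vehicle category, as in the
# vehicle type choice fuel type probabilities file (column names assumed)
probs = pd.DataFrame({
    'gas':    [0.9541, 0.9548, 0.9530, 0.9676],
    'diesel': [0.0096, 0.0037, 0.0015, 0.0096],
    'hybrid': [0.0355, 0.0409, 0.0451, 0.0225],
    'phev':   [0.0007, 0.0004, 0.0003, 0.0003],
    'bev':    [0.0000, 0.0001, 0.0001, 0.0000],
}, index=['Car_8', 'Car_9', 'Car_10', 'Car_11'])

# 3-row centered rolling mean damps the back-and-forth between adjacent years
smoothed = probs.rolling(window=3, center=True, min_periods=1).mean()

# renormalize so each row's probabilities still sum to 1
smoothed = smoothed.div(smoothed.sum(axis=1), axis=0)
```

A region adapting the file could apply this once to the raw NHTS-derived rows and write the result back out to the CSV.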
activitysim/examples/example_mtc_extended/configs/vehicle_type_choice_op2_fuel_type_probs.csv
    config.config_file_path(vehicle_type_data_file), comment='#')
fleet_year = model_settings.get('FLEET_YEAR')

vehicle_type_data['age'] = (1 + fleet_year - vehicle_type_data['vehicle_year']).astype(str)
It seems to make sense that we would be able to imply a variation in fuel type based on vehicle age. However, in this case age isn't necessarily relative: a 10-year-old car in a 2030 scenario should be treated the same as a 2-year-old car in 2022.
I'm trying to figure out how this is accommodated within the probabilities, but I can't seem to find anywhere that this is done.
Yeah, this wasn't done very consistently... I changed the probabilities file to include this same calculation: the probabilities file now includes body_type and vehicle_year columns, just like the vehicle_type_data.csv file, and age is explicitly calculated based on the user-input fleet_year.
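The relative-age convention discussed above can be sketched as follows (hypothetical data; the real files carry more columns):

```python
import pandas as pd

fleet_year = 2030  # the user-input FLEET_YEAR setting

vehicle_type_data = pd.DataFrame({
    'body_type': ['Car', 'Car', 'SUV'],
    'vehicle_year': [2028, 2020, 2028],
})

# age is relative to the scenario fleet year, so a 2028 vehicle in a 2030
# scenario gets the same age as a 2020 vehicle in a 2022 scenario
vehicle_type_data['age'] = (
    1 + fleet_year - vehicle_type_data['vehicle_year']
).astype(str)
```

Because the same calculation is now applied to both the vehicle type data and the probabilities file, the two tables join on a consistent age definition regardless of the scenario year.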
logger = logging.getLogger(__name__)


def get_combinatorial_vehicle_alternatives(alts_cats_dict, model_settings):
Suggest adding a unit test for this function.
Done. See the abm/test/test_misc/test_vehicle_type_alternatives.py script. I also cleaned up the function to remove model settings and make it better suited to a stand-alone unit test.
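The added test lives in abm/test/test_misc/test_vehicle_type_alternatives.py; conceptually, a test for combinatorial alternatives boils down to checking that every category combination appears exactly once. A self-contained sketch of that idea (the function below is a hypothetical stand-in, not the actual ActivitySim implementation):

```python
import itertools
import pandas as pd

def build_combinatorial_alternatives(alts_cats_dict):
    # stand-in for get_combinatorial_vehicle_alternatives: build the
    # cross-product of every category's values, one column per category
    combos = list(itertools.product(*alts_cats_dict.values()))
    return pd.DataFrame(combos, columns=list(alts_cats_dict.keys()))

def test_all_combinations_present():
    cats = {
        'body_type': ['Car', 'SUV'],
        'age': ['1', '2', '3'],
        'fuel_type': ['Gas', 'BEV'],
    }
    alts = build_combinatorial_alternatives(cats)
    # every combination appears, and none appears twice
    assert len(alts) == 2 * 3 * 2
    assert not alts.duplicated().any()
```

Removing the model settings argument, as described above, is what makes this kind of stand-alone test possible: the function depends only on the categories dictionary.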
choosers=choosers,
alternatives=alts_wide,
spec=model_spec,
log_alt_losers=log_alt_losers,
Something to consider more holistically in ActivitySim: most of these function calls seem to require 5-10 parameters, while many best practices call for <= 3.
All the parameters which relate to run settings (e.g. chunk size, trace label, trace choice name) rather than "substance" (e.g. alternatives, estimator, choosers) get in the way of the legibility of what is happening.
Don't get me wrong - I love being explicit about things - but I would consider bundling some of these into a config class that gets passed around, or something similar.
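One way the bundling suggested here could look, sketched as a dataclass (names are illustrative, not ActivitySim API):

```python
from dataclasses import dataclass

@dataclass
class SimulateRunSettings:
    # "plumbing" parameters that travel together through most calls
    chunk_size: int = 0
    trace_label: str = ''
    trace_choice_name: str = ''
    log_alt_losers: bool = False

def simple_simulate(choosers, alternatives, spec,
                    run_settings: SimulateRunSettings):
    # substantive inputs stay explicit; run-settings ride along in one object
    ...
```

Callers would then pass three substantive arguments plus one settings object, and adding a new run setting would not change any call signatures.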
Yes, this is something we have been actively discussing as a consortium. I think making changes to this effect is outside the scope of this pull request, though.
activitysim/abm/tables/vehicles.py
vehicles['vehicle_id'] = vehicles.household_id * 10 + vehicles.vehicle_num
vehicles.set_index('vehicle_id', inplace=True)

# I do not understand why this line is necessary, it seems circular
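A note on the `vehicle_id` construction: `household_id * 10 + vehicle_num` only yields unique ids while no household owns 10 or more vehicles. A quick stand-alone check of that assumption (hypothetical data):

```python
import pandas as pd

vehicles = pd.DataFrame({
    'household_id': [100, 100, 101],
    'vehicle_num': [1, 2, 1],
})

# vehicle_id embeds the household id in the upper digits and the
# within-household vehicle number in the last digit
vehicles['vehicle_id'] = vehicles.household_id * 10 + vehicles.vehicle_num
vehicles.set_index('vehicle_id', inplace=True)

# unique only while vehicle_num < 10 for every household
assert vehicles.index.is_unique
```

If a region's synthetic population can produce 10+ vehicles per household, a larger multiplier (or a plain MultiIndex of household_id and vehicle_num) would avoid id collisions.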
should figure this out before merging
I'm getting a numpy comparison error when running. Erroring at:

  File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/assign.py", line 286, in assign_variables
    expr_values = to_series(eval(expression, globals_dict, _locals_dict))
  File "<string>", line 1, in <module>
FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison

Numpy version: v1.21

Note: I scanned the code, where it seems like somebody was trying to get this to just warn, not error, but it doesn't seem to be working as run for some reason. I haven't troubleshot too much yet - thought I'd post in case it is a known issue.

Full Trace:
--------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------
Configured logging using basicConfig
INFO:activitysim:Configured logging using basicConfig
INFO - Read logging configuration from: /Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc/configs/logging.yaml
INFO - SETTING configs_dir: ['/Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc_extended/test/configs', '/Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc_extended/configs', '/Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc/configs']
INFO - SETTING settings_file_name: settings.yaml
INFO - SETTING data_dir: ['/Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc/data']
INFO - SETTING output_dir: /Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc_extended/test/output
INFO - SETTING households_sample_size: 10
INFO - SETTING chunk_size: 0
INFO - SETTING chunk_method: hybrid_uss
INFO - SETTING chunk_training_mode: disabled
INFO - SETTING multiprocess: None
INFO - SETTING num_processes: None
INFO - SETTING resume_after: None
INFO - SETTING trace_hh_id: None
INFO - ENV MKL_NUM_THREADS: None
INFO - ENV OMP_NUM_THREADS: None
INFO - ENV OPENBLAS_NUM_THREADS: None
INFO - NUMPY blas_info libraries: ['cblas', 'blas', 'cblas', 'blas']
INFO - NUMPY blas_opt_info libraries: ['cblas', 'blas', 'cblas', 'blas']
INFO - NUMPY lapack_info libraries: ['lapack', 'blas', 'lapack', 'blas']
INFO - NUMPY lapack_opt_info libraries: ['lapack', 'blas', 'lapack', 'blas', 'cblas', 'blas', 'cblas', 'blas']
INFO - run single process simulation
INFO - Time to execute open_pipeline : 0.062 seconds (0.0 minutes)
INFO - preload_injectables
INFO - Time to execute preload_injectables : 0.014 seconds (0.0 minutes)
INFO - #run_model running step initialize_landuse
Running step 'initialize_landuse'
INFO - Reading CSV file /Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc/data/land_use.csv
INFO - loaded land_use (25, 24)
INFO - initialize_landuse.annotate_tables - annotating land_use SPEC annotate_landuse
INFO - Network_LOS using skim_dict_factory: NumpyArraySkimFactory
INFO - allocate_skim_buffer shared False taz shape (826, 25, 25) total size: 2_065_000 (2.1 MB)
INFO - _read_skims_from_omx /Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc/data/skims.omx
INFO - _read_skims_from_omx loaded 826 skims from /Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc/data/skims.omx
INFO - writing skim cache taz (826, 25, 25) to /Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc_extended/test/output/cache/cached_taz.mmap
INFO - load_skims_to_buffer taz shape (826, 25, 25)
INFO - get_skim_data taz SkimData shape (826, 25, 25)
INFO - SkimDict init taz
INFO - SkimDict.build_3d_skim_block_offset_table registered 167 3d keys
Time to execute step 'initialize_landuse': 1.82 s
Total time to execute iteration 1 with iteration value None: 1.82 s
INFO - #run_model running step initialize_households
Running step 'initialize_households'
INFO - Reading CSV file /Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc/data/households.csv
INFO - full household list contains 5000 households
INFO - sampling 10 of 5000 households
INFO - loaded households (10, 7)
INFO - Reading CSV file /Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc/data/persons.csv
INFO - loaded persons (28, 7)
INFO - initialize_households.annotate_tables - annotating persons SPEC annotate_persons
INFO - initialize_households.annotate_tables - annotating households SPEC annotate_households
INFO - initialize_households.annotate_tables - annotating persons SPEC annotate_persons_after_hh
Time to execute step 'initialize_households': 0.37 s
Total time to execute iteration 1 with iteration value None: 0.37 s
INFO - #run_model running step compute_accessibility
Running step 'compute_accessibility'
INFO - Running compute_accessibility with 25 orig zones 25 dest zones
INFO - compute_accessibility Running adaptive_chunked_choosers with 25 choosers
INFO - Running chunk 1 of 1 with 25 of 25 choosers
INFO - Running compute_accessibility with 25 orig zones 25 dest zones
INFO - compute_accessibility computed accessibilities (25, 10)
Time to execute step 'compute_accessibility': 0.06 s
Total time to execute iteration 1 with iteration value None: 0.06 s
INFO - #run_model running step school_location
Running step 'school_location'
INFO - Running school_location.i1.sample.university with 4 persons
INFO - school_location.i1.sample.university.interaction_sample Running adaptive_chunked_choosers with 4 choosers
INFO - Running chunk 1 of 1 with 4 of 4 choosers
INFO - Running eval_interaction_utilities on 24 rows
INFO - Running school_location.i1.logsums.university with 11 rows
ERROR - assign_variables - FutureWarning (elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison) evaluating: np.where(sov_auto_op_cost.isna() | (sov_veh_option == 'non_hh_veh'), costPerMile, sov_auto_op_cost)
numpy.core._exceptions._UFuncInputCastingError: Cannot cast ufunc 'equal' input 1 from dtype('<U10') to dtype('float64') with casting rule 'same_kind'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/assign.py", line 286, in assign_variables
expr_values = to_series(eval(expression, globals_dict, _locals_dict))
File "<string>", line 1, in <module>
FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
INFO - Time to execute all models until this error : 2.814 seconds (0.0 minutes)
ERROR - activitysim run encountered an unrecoverable error
numpy.core._exceptions._UFuncInputCastingError: Cannot cast ufunc 'equal' input 1 from dtype('<U10') to dtype('float64') with casting rule 'same_kind'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/cli/run.py", line 260, in run
pipeline.run(models=config.setting('models'), resume_after=resume_after)
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/pipeline.py", line 617, in run
run_model(model)
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/pipeline.py", line 476, in run_model
orca.run([step_name])
File "/Users/elizabeth/opt/miniconda3/envs/asim/lib/python3.9/site-packages/orca/orca.py", line 2168, in run
step()
File "/Users/elizabeth/opt/miniconda3/envs/asim/lib/python3.9/site-packages/orca/orca.py", line 973, in __call__
return self._func(**kwargs)
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/location_choice.py", line 920, in school_location
iterate_location_choice(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/location_choice.py", line 774, in iterate_location_choice
choices_df, save_sample_df = run_location_choice(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/location_choice.py", line 619, in run_location_choice
run_location_logsums(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/location_choice.py", line 438, in run_location_logsums
logsums = logsum.compute_logsums(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/util/logsums.py", line 142, in compute_logsums
expressions.assign_columns(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/expressions.py", line 124, in assign_columns
results = compute_columns(df, model_settings, locals_dict, trace_label)
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/expressions.py", line 95, in compute_columns
= assign.assign_variables(expressions_spec,
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/assign.py", line 298, in assign_variables
raise err
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/assign.py", line 286, in assign_variables
expr_values = to_series(eval(expression, globals_dict, _locals_dict))
File "<string>", line 1, in <module>
FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
--------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------
numpy.core._exceptions._UFuncInputCastingError: Cannot cast ufunc 'equal' input 1 from dtype('<U10') to dtype('float64') with casting rule 'same_kind'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/examples/example_mtc_extended/test/simulation.py", line 15, in <module>
sys.exit(run(args))
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/cli/run.py", line 260, in run
pipeline.run(models=config.setting('models'), resume_after=resume_after)
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/pipeline.py", line 617, in run
run_model(model)
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/pipeline.py", line 476, in run_model
orca.run([step_name])
File "/Users/elizabeth/opt/miniconda3/envs/asim/lib/python3.9/site-packages/orca/orca.py", line 2168, in run
step()
File "/Users/elizabeth/opt/miniconda3/envs/asim/lib/python3.9/site-packages/orca/orca.py", line 973, in __call__
return self._func(**kwargs)
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/location_choice.py", line 920, in school_location
iterate_location_choice(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/location_choice.py", line 774, in iterate_location_choice
choices_df, save_sample_df = run_location_choice(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/location_choice.py", line 619, in run_location_choice
run_location_logsums(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/location_choice.py", line 438, in run_location_logsums
logsums = logsum.compute_logsums(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/abm/models/util/logsums.py", line 142, in compute_logsums
expressions.assign_columns(
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/expressions.py", line 124, in assign_columns
results = compute_columns(df, model_settings, locals_dict, trace_label)
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/expressions.py", line 95, in compute_columns
= assign.assign_variables(expressions_spec,
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/assign.py", line 298, in assign_variables
raise err
File "/Users/elizabeth/Documents/Websites/activitysim/activitysim/core/assign.py", line 286, in assign_variables
expr_values = to_series(eval(expression, globals_dict, _locals_dict))
File "<string>", line 1, in <module>
FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
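One defensive workaround for the expression that fails above, assuming the root cause is a dtype mismatch when `sov_veh_option` arrives as a numeric (e.g. all-NaN) column and is compared against a string (data below is hypothetical):

```python
import numpy as np
import pandas as pd

costPerMile = 18.29  # illustrative default cost

sov_auto_op_cost = pd.Series([15.0, np.nan, 21.0])
sov_veh_option = pd.Series(['car_1', 'non_hh_veh', 'car_2'])

# casting to str before the string comparison sidesteps numpy's
# ufunc 'equal' casting error when the column has a numeric dtype
mask = sov_auto_op_cost.isna() | (sov_veh_option.astype(str) == 'non_hh_veh')
sov_cost = np.where(mask, costPerMile, sov_auto_op_cost)
```

This is only a sketch of the defensive pattern; the actual fix in the annotation spec would depend on what dtype `sov_veh_option` really has when the expression is evaluated.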
Sorry about that, the setup I had was pointing to some other configs. This test should be passing now.
@e-lo I believe I have addressed all of your comments. I left a couple of conversations open for you to sign off on. Thanks for your helpful comments, and please let me know if there's anything else I missed.
activitysim/examples/example_mtc/configs/annotate_households.csv
Have all comments been addressed? Please advise; we have several regions waiting to pull this model into their implementation. Thanks!
@jfdman Since all vehicle make/model coding for the NHTS (since inception) maps to NHTSA's FARS vehicle make/model coding scheme [https://www.nhtsa.gov/research-data/fatality-analysis-reporting-system-fars], can you confirm that our vehicle type model conforms with NHTSA's FARS?
Confirmed all previous comments are addressed. Guy's question is the only one left. Thanks.
User Story
As a planner, I would like to understand the effects of vehicle type choice on travel behavior in order to support more detailed emissions analysis and more refined travel costs.
Requirements
Per https://github.com/ActivitySim/activitysim/wiki/Phase-6b-Scope-of-Work#task-2-vehicle-type-model
After applying this submodel, household vehicles should have the following properties:
In addition, each auto tour should be assigned the most likely household vehicles based on:
*Note: For this task, vehicle allocation to auto tours should not consider the availability of each individual vehicle.*
The implementation should have the following usability features:
Approach
Model Estimation
Implementation
Issues
Fixes #438