Agency Visualization Requirements
This is part of Task 4 of the Phase 6B work plan.
Stakeholders: Please fill out this survey of the existing tools and workflows in use at your agency. We are interested in how you manage your model runs, what tools you use when analyzing the results of those runs, and what capabilities you particularly like or feel are missing or in need of improvement.
After your agency has filled out the document, we will follow up with each of you for a brief interview or "show and tell" so we can better understand what is working and what is missing.
Please consider the use cases mentioned in the task order when you are answering:
- Organize/access model results
- Model debugging
- Calibration/validation
- Project analysis
- Scenario comparisons
- Results presentation
This is all going into one Wiki document so you can all see each other's responses, but PLEASE only edit the section for your own agency! Thanks! ✨
ARC (Atlanta Regional Commission)
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
Table attributes for the Population Synthesizer (ARC's PopSyn) and for ARC's ABM input and output databases are stored on the on-premises ARC SQL server. The ARC PopSyn was initially written in Java and designed to run against Microsoft SQL Server Enterprise Edition. More recently, ARC revised PopSyn so that it can run on MySQL instead of Microsoft SQL.
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
ARC currently uses and maintains ABMVIZ / ActivityViz; for more details, see ARC ABMVIZ: https://atlregional.github.io/ActivityViz/. Please note that we recently deleted the old model runs that were there, because they were from the ARC 2016 major plan update and we did not want anyone to get confused. Once we are allowed back in the office (we are currently out due to the pandemic), we will upload the data from the most recent ARC ABM model runs and resume regularly updating the site with each TIP/RTP plan amendment, as was the case before the pandemic.
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
ARC currently maintains its ABMVIZ tool to visualize model output in a GitHub-based platform. The original intent of this platform was to be a public-facing tool, with model outputs corresponding to official scenarios that could be released to the modeling community. Recently, ARC modelers expressed an interest in a tool that would allow them to quickly and efficiently analyze scenario tests that are not part of official model releases; an example use case is model calibration/validation, when the model is applied many times. ARC modelers plan to create a pilot version of a new tool by converting certain parts of the ARC ABMVIZ visuals into R Markdown and developing several supporting routines and visualization tools in R.

As part of model development, numerous R Markdown HTML reports will be developed to visualize model factors and parameters for quick review and troubleshooting. These detailed HTML reports will also help accelerate model calibration and validation. Where possible, the reports would color-code the technical results (green for values meeting validation benchmarks, red for values outside them), and a report will be produced for each step of the model. These reports would remove the need for manual Excel-based reports and could be updated in less than 20 minutes while providing extensive insight into the model's performance.
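The planned reports are R Markdown, so the actual implementation will be in R; purely as an illustration of the green/red benchmark color-coding described above, here is a minimal sketch of the same logic in Python with pandas (the values and the ±10% tolerance are made up):

```python
import pandas as pd

# Hypothetical validation table: modeled vs. target mode shares.
summary = pd.DataFrame({
    "mode": ["drive_alone", "shared_ride", "transit", "walk"],
    "modeled": [0.62, 0.21, 0.05, 0.12],
    "target": [0.60, 0.22, 0.06, 0.12],
})
summary["pct_diff"] = (summary["modeled"] - summary["target"]) / summary["target"]

def flag(value, tolerance=0.10):
    # Green when within the (assumed) +/-10% benchmark, red otherwise.
    ok = abs(value) <= tolerance
    return "background-color: #c6efce" if ok else "background-color: #ffc7ce"

# Styler.map (pandas >= 2.1; use .applymap on older versions) styles cell-by-cell.
styled = summary.style.map(flag, subset=["pct_diff"]).format({"pct_diff": "{:+.1%}"})
styled.to_html("mode_share_validation.html")
```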
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
ARC modelers also plan to develop an in-house data analysis tool in R Shiny to generate easy-to-use outputs that assist in analyzing model results. Through the R Shiny interface, the tool will allow the user to select a transit route or a roadway corridor, the day of week, and the range of days for analysis. The time-of-day definitions would be updated by the user through an Excel input file.
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
ARC modelers propose to schedule an internal workshop with ARC planning staff to review the existing visuals, determine their usefulness, and establish ARC's long-term goals for the revised ABMVIZ. The intent of this workshop would be to outline a plan for which visuals to transition first and which new visuals to develop.
Met Council (Metropolitan Council)
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
The Met Council uses Cube's Scenario Manager to manage model runs. The Council creates a catalog file for the base year model. New scenarios are added to this catalog using the Scenario Pane, and details about each scenario are contained in its Properties field. (NOTE: the Council plans to eventually transition to running its model using a batch file instead of the catalog.)
Input files for all scenarios are stored in an input folder. Output files for each scenario are stored in a folder named after the Scenario. Model runs for planning work are copied onto a Council network drive.
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
Most of the model summaries are performed using R and ArcMap. Staff have written R scripts that summarize model results commonly used as performance measures. Spatial data, such as segment volumes, are typically visualized using ArcMap.
Council staff have created some R Shiny apps that can be used to compare the results across different scenarios.
The Council recently acquired a limited number of licenses for Tableau and plans to explore building tools to visualize model outputs using this tool.
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
Many of the comparisons we do across scenarios involve aggregate measures such as VMT and number of trips. It would be useful to have a visualization tool that lets us compare outputs at various points in the model (e.g., outputs from different submodels) to see what is causing different outcomes across scenarios.
It would also be useful to have visualization tools that quickly produce maps of spatial trip making patterns and network indicators. It would be helpful to have tools which intuitively show how origins and destinations are changing across scenarios.
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
A tool that combines the ability to create interactive maps and visualizations would be useful. It should also produce high-resolution static images that can easily fit into PowerPoints and PDFs.
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
The answer depends on:
- The type of technology used. If the toolkit uses a visualization technology we are already familiar with (in order of preference: Shiny, Tableau, Power BI), we would be more likely to replace any currently produced interactives with those produced by the platform. If it uses something we are less familiar with (e.g., custom JavaScript), we would be more likely to use the consortium's tool as a supplement to existing tools.
- The ability to easily customize visualizations to fit the information, message, and style we are trying to convey. Good documentation and how-tos will be necessary.
- The commitment from the consortium to troubleshooting, fixes, feature requests, and updates. Maintenance of visualizations often takes more staff-hours than the initial creation of the tool. Will there be a system for submitting feature requests and bug reports, e.g., GitHub? Who will be responsible for fixing them, and on what timeline?
MTC (Metropolitan Transportation Commission)
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
Model runs are given a unique identifier, which includes model year, model version, project, scenario, and version (e.g., 2035_TM152_FBP_Plus_20: model year = 2035, model version = TM1.5.2, project = Final Blueprint, scenario = Plus, version = 20). We typically make an Asana task for each model run, which includes detailed notes on the source of the inputs, the purpose of the run (and how it differs from previous runs), and discussion of results. The run itself is set up by a batch file which copies inputs from various sources, sometimes intelligently depending on model year, model version, project, and scenario.
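As a small aside, the naming convention is regular enough to parse mechanically; a minimal sketch (the helper and its field names are ours, not MTC's, and a project name containing underscores would need a smarter split):

```python
from typing import NamedTuple

class RunID(NamedTuple):
    model_year: int
    model_version: str
    project: str
    scenario: str
    version: int

def parse_run_id(run_id: str) -> RunID:
    # Split the five underscore-delimited fields of the run identifier.
    year, model_version, project, scenario, version = run_id.split("_")
    return RunID(int(year), model_version, project, scenario, int(version))

print(parse_run_id("2035_TM152_FBP_Plus_20"))
# RunID(model_year=2035, model_version='TM152', project='FBP', scenario='Plus', version=20)
```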
Input networks are built by NetworkWrangler from a spec for Blueprint, typically for a given project and for all model years at the same time; these are saved into a versioned network file folder and not modified after being built.
Outputs are copied from model run machines to network file storage. We do not use relational databases for inputs or outputs.
For an arbitrary set of model runs listed in a CSV, we have scripts and Tableau workbooks that will easily show a number of input and output summaries across all of those model runs.
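A minimal sketch of that pattern, assuming the CSV lists a run ID and a run directory and that each run directory contains a standard summary file (all file and column names here are hypothetical):

```python
from pathlib import Path
import pandas as pd

runs = pd.read_csv("model_runs.csv")  # hypothetical columns: run_id, run_dir
frames = []
for _, row in runs.iterrows():
    # Read the same standard summary from each run and tag it with its run ID.
    summary = pd.read_csv(Path(row["run_dir"]) / "summaries" / "vmt_summary.csv")
    summary["run_id"] = row["run_id"]
    frames.append(summary)
all_runs = pd.concat(frames, ignore_index=True)  # one table across all runs
```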
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
We use Tableau workbooks because they're easy to use for exploring output and for debugging; we also maintain a set of Tableau workbooks that we update and publish (mostly automatically) to internal and external websites.
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
I personally love the Tableau approach because it's fairly quick to set up and use, and it publishes to the web well. Some of my staff who haven't worked as much with Tableau aren't as fond of it and are more comfortable with other tools (Python notebooks, R scripts, or Excel).
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
Since Tableau workbooks can be exported to PDF, I think they can be made more polished if necessary. I also really like the ease of communicating by publishing to the web, especially since those websites can be fairly interactive (for example, our Roadway validation workbook).
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
I think you can probably tell that I am pretty wedded to using Tableau because of the flexibility it offers for data exploration and for making new vizzes to answer new questions. However, if we had a standard viz that was useful, easy to produce, and easy to share, I would certainly be happy to use it.
MWCOG (Metropolitan Washington Council of Governments)
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
MWCOG currently manages model run inputs and outputs in file folders. The ActivitySim-based Gen3 Model that is currently under development will likely adopt the same system. Cube does provide a scenario manager, but MWCOG staff intentionally do not use it because of the added complexity.
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
Currently, MWCOG mainly uses ArcGIS to visualize outputs from our trip-based Gen2 Model. We plan to adopt an R-based ABM Visualizer developed by RSG for the ActivitySim-based Gen3 Model.
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
The Gen3 Model is currently at the Phase 1 Model Deployment stage. We will further evaluate the ABM Visualizer once we gain more hands-on experience with the model and the visualization tool.
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
The ABM Visualizer was mainly designed for internal use. We are interested in visualization tools that can convey model outputs (e.g., outputs from a race-related equity analysis) in less technical and more straightforward terms to the Board or the public.
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
At the early stage of Gen3 Model development, our agency is open to either option.
Ohio DOT
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
- Models are implemented in Cube's Application Manager. A new .cat file is developed for each Model of Record (MOR), which happens when a base year is validated or a new LRP is developed. All model runs are added in the Scenario Pane, and when input files are modified in Scenario Manager, it leaves documentation of which input files and settings were used for each project/year/scenario.
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
- Work to develop a Power BI dashboard is ongoing. This will be used to show ABM trip list outputs, and it may be extended to show output network statistics as well. Currently, a set of Cube scripts is run at the end of a model run to produce validation statistics (highway and transit). These .dbf tables can be imported via a macro into an Excel workbook, and graphs are updated. Additionally, because Ohio has network standards (with common attribute names and definitions), a standard .vpr file is available for all Ohio models that can display standard network fields, e.g., V/C ratio or difference for assigned links with counts. Some R scripts are also available to display outputs from the ABM.
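As a side note, the .dbf validation tables can also be pulled straight into a DataFrame rather than going through the Excel macro; a sketch using the dbfread package (the file name is hypothetical):

```python
import pandas as pd
from dbfread import DBF

# Each DBF record is an ordered dict, so the iterator feeds DataFrame directly.
df = pd.DataFrame(iter(DBF("hwy_validation.dbf")))
print(df.head())
```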
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
- Subarea assignment validation statistics were supposed to be included but were forgotten. (This isn't hard to add, but it needs a day.) The Power BI effort is meant to make displaying the trip list easier and to develop some standard displays. (Currently, Cube scripts are used to tabulate the trip lists.)
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
- TAZ maps are developed for the Base, TIP, and LRP years showing land use, employment, and population and their change/growth. These are used in our Design Traffic Early Coordination meetings so that all parties can determine whether a development is included in the MOR or not. Additionally, all MPOs have an EJ process, and maps and tables are generated for those; however, those are not standard across Ohio.
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
- Depends on what is developed. Probably both, and potentially the answer is different depending on whether we are discussing the MPO or Statewide implementation. (Statewide obviously has additional required visualization.)
Oregon DOT
1. Run management. How do you currently manage model run inputs and outputs?
In short, the answer is "informally".
In long: anything that is spatial (input or output) we try to retain in our PTV Visum database; this includes all network and zonal (three-zone-system) inputs and outputs. Visum is our one-stop shop for spatial input and output data.
There are a large number of inputs/outputs that aren't spatial; for these we try to keep everything in easily machine-readable CSVs (or text files). We attempt to keep all inputs and outputs documented on our wiki (for metadata). Matrices are stored in OMX. For calibration data, we strive to put target data into the same format as ABM output; that way, any visualization/comparison tool we develop for calibration also works for ABM-to-ABM run comparison.
I would say our outputs (from CT-RAMP) are currently a mess. What we are pushing ActivitySim towards is an output structure that looks just like an augmented survey (with a single household table, person table, trip table, vehicle table, ...). There will likely be hundreds to thousands of interim decision-context fields on some of these tables (like the trip table, potentially). These should be stored in HDF5, with the user specifying which fields are exported in the user-friendly CSV that is used for 99% of analysis/visualization. This is a wish-list design element, not current practice.
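A minimal sketch of that wish-list design, with hypothetical table and field names: the full trip table, interim decision-context fields included, goes to HDF5, while a user-specified field list drives the friendly CSV:

```python
import pandas as pd

# Hypothetical trip table with interim decision-context fields.
trips = pd.DataFrame({
    "trip_id": [1, 2], "person_id": [10, 11], "mode": ["walk", "auto"],
    "mode_choice_logsum": [0.42, 1.31], "dest_choice_logsum": [2.10, 1.80],
})

# Full output, every field, goes to HDF5 (requires the PyTables package)...
trips.to_hdf("outputs.h5", key="trips", mode="w")

# ...while the user-friendly CSV carries only a user-specified field list.
csv_fields = ["trip_id", "person_id", "mode"]  # hypothetical user setting
trips[csv_fields].to_csv("trips.csv", index=False)
```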
2. Visualization now: which tools do you currently use for visualizing model outputs?
We use PTV Visum to visualize spatial data, along with Esri ArcGIS and QGIS. We have an RSG custom-written (in R) HTML visualizer to review ABM-to-ABM outputs. Additionally, since we have processed our target data to look like ABM outputs, we use this visualizer to compare the ABM to target data during build/calibration.
Everything else is custom-written scripts for common data analysis tasks/requests. We are working on building a documented wiki library of these various task/request-specific "functions".
Nothing we have is web-based, but we are looking to put our functions/tools on the web, shared and documented.
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit?
As noted in 1, we want the ABM outputs better formatted; I think we are working towards that with ActivitySim. Other than that, we are pretty happy with our visualization tools for CT-RAMP, and we are hoping to get similar (maybe improved) functionality for ActivitySim. A critical element for us is being able to do multiple-scenario comparisons, at least 2 scenarios: looking at the results from one run is of very little value for us. Our current visualization tools are really only built for looking at 2 scenarios; it would be a very interesting and meaningful upgrade to be able to look at X number of scenarios with the given tool.
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs?
I think "public ready" visualization is something that maybe gets built way down the road; analyst tools and assistance are the primary need (lowest-hanging fruit). For Oregon, outward-facing visualizations have been, and are envisioned to be, developed on an as-needed basis, and few (if any) follow the same approach/output; each one is its own snowflake.
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)?
Our hope would be that the common viz toolkit becomes the toolkit we use. But if it falls short, we will keep using existing tools or build workarounds. The hope is that over time we rely less and less on Oregon-specific tools and more and more on common viz tools.
PSRC (Puget Sound Regional Council)
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
Inputs that may vary by scenario, typically land use and network files, are stored on a network drive in folders that are uniquely named for a given scenario. Land use and networks are stored as separate scenarios so that a given network can be run with any land use and vice versa (constrained by forecast year). The model config file has five parameters that must be set: inputs_dir, forecast_year, base_year, landuse_inputs, and network_inputs. During a model run, these files are automatically copied to the model run location. Static files that do not change for a given scenario are stored in a SQLite database, which is accessed by the model throughout a run.
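A sketch of what that input-copy step could look like, built around the five parameters named above (the config format, paths, and folder layout are all assumptions on our part):

```python
import shutil
from pathlib import Path

# The five required parameters; values here are hypothetical.
config = {
    "inputs_dir": "//network_drive/model_inputs",
    "forecast_year": 2050,
    "base_year": 2018,
    "landuse_inputs": "landuse_2050_base",
    "network_inputs": "network_2050_constrained",
}

run_dir = Path("model_run/inputs")
for key in ("landuse_inputs", "network_inputs"):
    # Copy the named scenario folder from the shared drive into the run.
    src = Path(config["inputs_dir"]) / config[key]
    shutil.copytree(src, run_dir / config[key], dirs_exist_ok=True)
```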
Outputs are stored in folders within the model run.
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
We create pre-aggregated .csv outputs that are controlled by an expression file. These files are then used to create automated Jupyter notebook HTML files, so when a run is complete there should be a set of HTML pages for various summaries, including base year validation comparisons. We can also visualize a run using our Plotly Dash dashboard, which uses the same .csv files and runs on an AWS Linux instance. The dashboard is a fully interactive, web-based app, so it offers a centralized, easily accessible place to examine and compare scenarios.
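One way such notebook-to-HTML automation can be wired up (not necessarily how PSRC does it) is with jupyter nbconvert; the notebook names and paths here are hypothetical:

```python
import subprocess
from pathlib import Path

run_dir = Path("runs/scenario_2050")
for nb in ["network_summary.ipynb", "validation.ipynb"]:
    # --execute re-runs the cells against the finished run before exporting.
    subprocess.run(
        ["jupyter", "nbconvert", "--to", "html", "--execute", nb,
         "--output-dir", str(run_dir / "reports")],
        check=True,
    )
```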
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
Visually tracing/mapping a person's or household's tours and trips is something that is missing from our visualization toolkit. We can do it, but it is not automated, so it is pretty onerous.
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
I think this still depends on the audience. I really like applications like MTC's Vital Signs, but something else may be needed for a Planning Board/Committee. I guess what is needed is technology in place that can easily stand up custom web-based applications/visualizations for different purposes; R Shiny, Plotly Dash, and Tableau seem to fill this role at PSRC.
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
Change is fine, but I think it is important that a run automatically creates summaries that are immediately accessible, and I think Jupyter-notebooks-to-HTML outputs make a lot of sense for this. I would say the majority of our runs are looked at quickly and then deleted, so they often do not make it to our dashboard. We like Plotly Dash for web-based dashboards and have a lot of experience with it, but we are very open to other frameworks. Shiny is used quite heavily by our organization, and I see a lot of positives around it.
SANDAG (San Diego Association of Governments)
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
- A protected standard release folder contains the software, configuration settings, and common input files. From a GUI, an analyst first generates a scenario using the standard release, copying over the software and common inputs and setting up scenario-specific configurations. The analyst then modifies inputs for the specific scenario, such as network changes. Finally, the analyst starts the model run inside EMME via a GUI, with options to skip some steps if desired.
- Inputs and raw outputs are flat files or EMME matrices in the scenario folder. An automated data exporting step is run after model completion to transform outputs and some inputs into flat files ready for loading into a SQL database. Database loading is automated and is scheduled at night to avoid choking the database; if needed, a DBA can manually load a scenario into the database. The data exporting step appends skims to the trip lists, which are loaded into the database (see the sketch after this list). Matrix skims themselves are not loaded into the database because they are very large.
- Each scenario loaded into the database is assigned a unique scenario ID, year, analyst name, etc. A set of standard SQL stored procedures is used to generate reports and performance metrics.
- A separate Python-based procedure is used to generate multi-scenario comparisons, including mode shares, trip lengths, trip purposes, total demand, VMT, VHT, VHD, etc. This procedure grabs data directly from the raw outputs, skipping the SQL database; we use it for internal model result investigations.
- For a base year, a validation folder is created with an automated step to generate comparisons (a flat file) between counts/observed speeds and model-estimated volumes/speeds. The analyst then runs a separate step to generate a Power BI-based story map using the comparison file. An example Power BI validation story map: https://storymaps.arcgis.com/stories/514f1f63879945999c6fe31c3fc7f666
- A standalone web-based visualization of SB 743 VMT metrics: https://www.arcgis.com/apps/webappviewer/index.html?id=5b4af92bc0dd4b7babbce21a7423402a
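The skim-appending step mentioned above follows a common pattern: look up a skim value for each trip's origin-destination pair and attach it as a new column. A minimal sketch with openmatrix and pandas, assuming hypothetical file and matrix names and sequentially numbered 1-based zones:

```python
import openmatrix as omx
import pandas as pd

trips = pd.read_csv("trip_list.csv")  # assumed columns: orig_taz, dest_taz
with omx.open_file("traffic_skims.omx") as skims:
    time = skims["SOV_TIME"][:]  # read the full matrix as a numpy array
    # Vectorized O-D lookup; subtract 1 to map 1-based zones to array indices.
    trips["sov_time"] = time[trips["orig_taz"].to_numpy() - 1,
                             trips["dest_taz"].to_numpy() - 1]
trips.to_csv("trip_list_with_skims.csv", index=False)
```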
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
- For model calibration results, we use an HTML-based visualizer developed by RSG, similar to some other member agencies. For base year model validation, we use the Power BI-based story map (see above).
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
- A unified/standard visualization toolkit that uses a single platform, such as Power BI.
- Visualization of the reports/tables generated from the SQL stored procedures.
- A tiered visualization platform that caters to the different needs of modeling staff, planning staff, board members, and the general public.
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
- Most board presentations require an additional step of working with GIS staff to create static maps from modeling data. It would be great if the ActivitySim visualization tool could reduce the need for static maps.
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
- Our Power BI-based validation story map is well received by most stakeholders. It would be great to build the other missing visualization pieces on top of it.
SEMCOG (Southeast Michigan Council of Governments)
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
We are currently using folders to organize different model runs/scenarios. Most likely, we will use similar structures when SEMCOG adopts the ActivitySim-based ABM (ActSim).
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
We use the visualization functions from the TransCAD software, which is only available to a few modelers, and various ArcGIS tools, which are widely available in the agency.
In addition, for SEMCOG's trip-based model we developed standalone HTML reports and TransCAD map utilities to present the results from different model components. Both the CVM and the ABM under development have their own dashboards to report model outputs; the ABM visualizer compares the model outputs to survey/census data where available.
SEMCOG also purchased a limited number of CARTO licenses for developing dynamic online maps, such as a web-based interactive map showing volumes for various model years.
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
It would be nice to have visualization tools that can be customized by end users to choose what to show and how to show it. We also wish there were easy-to-use tools to help us compare multiple scenarios and present the results to non-modelers effectively.
One other helpful area would be tools to query matrix values, for example, creating a graph showing O/D pairs where travel time is above a certain value or lies within a specific range. This link (https://www.linkedin.com/pulse/transportation-data-visualization-1-migration-flow-jason-li/) has some good examples of visualizing OD flows. The 3D view of a matrix in TransCAD has this option too, but it could be enhanced.
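For matrices stored in OMX, this kind of threshold query takes only a few lines of NumPy; a minimal sketch, assuming a skim matrix named AM_TRAVEL_TIME and sequentially numbered 1-based zones:

```python
import numpy as np
import openmatrix as omx

with omx.open_file("skims.omx") as f:
    tt = f["AM_TRAVEL_TIME"][:]  # full matrix as a numpy array

# O/D pairs whose travel time lies between 45 and 60 minutes.
origins, dests = np.where((tt >= 45) & (tt <= 60))
print(list(zip(origins[:10] + 1, dests[:10] + 1)))  # first ten pairs, 1-based
```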
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
We use any tools available to us (mentioned above) to manipulate and present model outputs.
It will be helpful, when presenting model results to the public, if we can connect the results to the audience (e.g., how is he/she represented in our model?).
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
We prefer no-cost/low-cost options, but we are also open to any easy-to-use, cost-effective option.
SFCTA (San Francisco County Transportation Authority)
1. Run management. How do you currently manage model run inputs and outputs? Think about the different contexts such as model development, calibration/validation, and project-based model application work.
SFCTA uses a simple directory-based system to manage model run inputs and outputs, both for model application work and model development/calibration/validation work.
2. Visualization now: which tools do you currently use for visualizing model outputs? Do you have proprietary packages, agency-written scripts/tools, etc. Is any of it web-based?
ArcMap and QGIS are generally the primary tools used for presenting static model application inputs and outputs. However, we have been increasingly using consultant- and agency-developed open-source JavaScript/Leaflet-based interactive data visualizations for internal review of model inputs and outputs, as well as for public access and review of both model and other data.
3. Analysis needs: what do you feel is currently missing from your analyst visualization toolkit? Are there things you can't do at all that you need? Are there things you can do, but they are onerous or annoying or difficult?
The greatest need is to develop methods for spinning up new visualizations more quickly. In cases where some unique data or analysis is being presented, it is understandable that it takes time and effort to conceptualize how to present the data most effectively, especially if the visualization is to be public-facing. However, many model input/output visualization needs are not bespoke, such as visualizing and comparing land use inputs, skims, or mode choice forecasts. Ideally, we could have a collection or gallery of "standard" interactive visualizations that we select from and populate simply by pointing to the directory or directories where standard model outputs can be found. These "standard" visualizations would allow some basic customizations around symbology, such as breakpoints and colors.
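One way such a gallery entry could be declared (purely illustrative; every key, template name, and path below is an assumption):

```python
from pathlib import Path

# A "standard" visualization spec: a template, the run directories to read
# standard outputs from, and basic symbology overrides.
viz_spec = {
    "template": "mode_choice_comparison",
    "runs": ["runs/2050_baseline", "runs/2050_project"],
    "symbology": {
        "breakpoints": [0.25, 0.50, 0.75],
        "colors": ["#fee8c8", "#fdbb84", "#e34a33"],
    },
}

def check_spec(spec: dict) -> None:
    # Confirm every run directory exists before the gallery renders it.
    missing = [d for d in spec["runs"] if not Path(d).is_dir()]
    if missing:
        raise FileNotFoundError(f"missing run directories: {missing}")
```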
4. Outward facing visualization: Stepping away from an internal analyst role and thinking about outward presentation, what would you need to help convey model outputs? What tools do you use to get from model outputs to Board, TRB presentations, and to the public?
Developing public-facing visualizations is qualitatively different from developing internal-facing visualizations. First, significant consideration must be given to what data is being presented, how it is presented, and the type of interactivity that is appropriate. While internal visualizations should provide a broad range of exploratory and presentation capabilities, public-facing visualizations should help tell an unbiased story via a different (and in some cases more limited) set of interactive capabilities, following the principle that "less is more." While we often continue to use more traditional GIS tools for static presentations to the Board, TRB, etc., our ideal is a static map in a document that is derived directly from our online interactive tool, so that people can follow the link and start to interact with the map and data.
5. At your agency, is a common viz toolkit something that would be added on top of existing tools that you already love, or more likely to be something that replaces existing workflows (if any)? We ask because the task clearly states that the consortium will coalesce around a common set of tools, and that will require changes for all of you!
We would love a common viz toolkit that can be flexibly deployed for both internal and external applications and communications. However, we anticipate a continuing need to maintain existing tools that provide features beyond simple visualization of geospatial data, such as the ability to perform transformations, projections, spatial joins, etc.